00:00:00.001 Started by upstream project "autotest-nightly" build number 4312 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3675 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.106 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.107 The recommended git tool is: git 00:00:00.107 using credential 00000000-0000-0000-0000-000000000002 00:00:00.109 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.132 Fetching changes from the remote Git repository 00:00:00.139 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.163 Using shallow fetch with depth 1 00:00:00.163 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.163 > git --version # timeout=10 00:00:00.187 > git --version # 'git version 2.39.2' 00:00:00.187 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.227 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.227 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.100 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.113 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.125 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.125 > git config core.sparsecheckout # timeout=10 00:00:05.136 > git read-tree -mu HEAD # timeout=10 00:00:05.151 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.177 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.177 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.283 [Pipeline] Start of Pipeline 00:00:05.295 [Pipeline] library 00:00:05.296 Loading library shm_lib@master 00:00:05.296 Library shm_lib@master is cached. Copying from home. 00:00:05.310 [Pipeline] node 00:00:05.337 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.338 [Pipeline] { 00:00:05.344 [Pipeline] catchError 00:00:05.345 [Pipeline] { 00:00:05.354 [Pipeline] wrap 00:00:05.359 [Pipeline] { 00:00:05.365 [Pipeline] stage 00:00:05.366 [Pipeline] { (Prologue) 00:00:05.380 [Pipeline] echo 00:00:05.382 Node: VM-host-SM38 00:00:05.387 [Pipeline] cleanWs 00:00:05.397 [WS-CLEANUP] Deleting project workspace... 00:00:05.397 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.404 [WS-CLEANUP] done 00:00:05.657 [Pipeline] setCustomBuildProperty 00:00:05.746 [Pipeline] httpRequest 00:00:06.091 [Pipeline] echo 00:00:06.093 Sorcerer 10.211.164.20 is alive 00:00:06.102 [Pipeline] retry 00:00:06.104 [Pipeline] { 00:00:06.118 [Pipeline] httpRequest 00:00:06.124 HttpMethod: GET 00:00:06.125 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.126 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.133 Response Code: HTTP/1.1 200 OK 00:00:06.134 Success: Status code 200 is in the accepted range: 200,404 00:00:06.134 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.319 [Pipeline] } 00:00:08.334 [Pipeline] // retry 00:00:08.340 [Pipeline] sh 00:00:08.622 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.634 [Pipeline] httpRequest 00:00:08.957 [Pipeline] echo 00:00:08.958 Sorcerer 10.211.164.20 is alive 00:00:08.966 [Pipeline] retry 00:00:08.968 [Pipeline] { 00:00:08.976 [Pipeline] httpRequest 00:00:08.980 HttpMethod: GET 00:00:08.981 URL: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:08.982 Sending request to url: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:09.025 Response Code: HTTP/1.1 200 OK 00:00:09.025 Success: Status code 200 is in the accepted range: 200,404 00:00:09.026 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:01:02.830 [Pipeline] } 00:01:02.846 [Pipeline] // retry 00:01:02.853 [Pipeline] sh 00:01:03.139 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:01:05.690 [Pipeline] sh 00:01:05.974 + git -C spdk log --oneline -n5 00:01:05.974 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:05.974 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:05.974 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:01:05.974 2e10c84c8 nvmf: Expose DIF type of namespace to host again 00:01:05.974 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:01:05.995 [Pipeline] writeFile 00:01:06.011 [Pipeline] sh 00:01:06.297 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:06.310 [Pipeline] sh 00:01:06.594 + cat autorun-spdk.conf 00:01:06.594 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.594 SPDK_TEST_NVME=1 00:01:06.594 SPDK_TEST_FTL=1 00:01:06.594 SPDK_TEST_ISAL=1 00:01:06.594 SPDK_RUN_ASAN=1 00:01:06.594 SPDK_RUN_UBSAN=1 00:01:06.594 SPDK_TEST_XNVME=1 00:01:06.594 SPDK_TEST_NVME_FDP=1 00:01:06.594 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:06.601 RUN_NIGHTLY=1 00:01:06.603 [Pipeline] } 00:01:06.615 [Pipeline] // stage 00:01:06.630 [Pipeline] stage 00:01:06.633 [Pipeline] { (Run VM) 00:01:06.646 [Pipeline] sh 00:01:06.930 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:06.930 + echo 'Start stage prepare_nvme.sh' 00:01:06.930 Start stage prepare_nvme.sh 00:01:06.930 + [[ -n 0 ]] 00:01:06.930 + disk_prefix=ex0 00:01:06.930 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:06.930 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:06.930 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:06.930 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.930 ++ SPDK_TEST_NVME=1 
00:01:06.930 ++ SPDK_TEST_FTL=1 00:01:06.930 ++ SPDK_TEST_ISAL=1 00:01:06.930 ++ SPDK_RUN_ASAN=1 00:01:06.930 ++ SPDK_RUN_UBSAN=1 00:01:06.930 ++ SPDK_TEST_XNVME=1 00:01:06.930 ++ SPDK_TEST_NVME_FDP=1 00:01:06.930 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:06.930 ++ RUN_NIGHTLY=1 00:01:06.930 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:06.930 + nvme_files=() 00:01:06.930 + declare -A nvme_files 00:01:06.930 + backend_dir=/var/lib/libvirt/images/backends 00:01:06.930 + nvme_files['nvme.img']=5G 00:01:06.930 + nvme_files['nvme-cmb.img']=5G 00:01:06.930 + nvme_files['nvme-multi0.img']=4G 00:01:06.930 + nvme_files['nvme-multi1.img']=4G 00:01:06.930 + nvme_files['nvme-multi2.img']=4G 00:01:06.930 + nvme_files['nvme-openstack.img']=8G 00:01:06.930 + nvme_files['nvme-zns.img']=5G 00:01:06.930 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:06.930 + (( SPDK_TEST_FTL == 1 )) 00:01:06.930 + nvme_files["nvme-ftl.img"]=6G 00:01:06.930 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:06.930 + nvme_files["nvme-fdp.img"]=1G 00:01:06.930 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:06.930 + for nvme in "${!nvme_files[@]}" 00:01:06.930 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:06.930 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:06.930 + for nvme in "${!nvme_files[@]}" 00:01:06.930 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G 00:01:06.930 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:06.930 + for nvme in "${!nvme_files[@]}" 00:01:06.930 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:06.930 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:06.930 + for nvme in "${!nvme_files[@]}" 00:01:06.930 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:06.930 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:06.930 + for nvme in "${!nvme_files[@]}" 00:01:06.930 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:07.503 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:07.503 + for nvme in "${!nvme_files[@]}" 00:01:07.503 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:07.503 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:07.503 + for nvme in "${!nvme_files[@]}" 00:01:07.503 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:07.503 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:07.764 + for nvme in "${!nvme_files[@]}" 00:01:07.764 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G 00:01:07.764 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:07.764 + for nvme in "${!nvme_files[@]}" 00:01:07.764 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:08.338 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:08.338 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:08.338 + echo 'End stage prepare_nvme.sh' 00:01:08.338 End stage prepare_nvme.sh 00:01:08.353 [Pipeline] sh 00:01:08.638 + DISTRO=fedora39 00:01:08.638 + CPUS=10 00:01:08.638 + RAM=12288 00:01:08.638 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:08.638 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:08.638 00:01:08.638 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:08.638 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:08.638 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:08.638 HELP=0 00:01:08.638 DRY_RUN=0 00:01:08.638 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img, 00:01:08.638 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:08.638 NVME_AUTO_CREATE=0 00:01:08.638 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,, 00:01:08.638 NVME_CMB=,,,, 00:01:08.638 NVME_PMR=,,,, 00:01:08.638 NVME_ZNS=,,,, 00:01:08.638 NVME_MS=true,,,, 00:01:08.638 NVME_FDP=,,,on, 00:01:08.638 SPDK_VAGRANT_DISTRO=fedora39 00:01:08.638 SPDK_VAGRANT_VMCPU=10 00:01:08.638 SPDK_VAGRANT_VMRAM=12288 00:01:08.638 SPDK_VAGRANT_PROVIDER=libvirt 00:01:08.638 SPDK_VAGRANT_HTTP_PROXY= 00:01:08.638 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:08.638 SPDK_OPENSTACK_NETWORK=0 00:01:08.638 VAGRANT_PACKAGE_BOX=0 00:01:08.638 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:08.638 FORCE_DISTRO=true 00:01:08.638 VAGRANT_BOX_VERSION= 00:01:08.638 EXTRA_VAGRANTFILES= 00:01:08.638 NIC_MODEL=e1000 00:01:08.638 00:01:08.638 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:08.639 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:11.185 Bringing machine 'default' up with 'libvirt' provider... 00:01:11.446 ==> default: Creating image (snapshot of base box volume). 00:01:11.446 ==> default: Creating domain with the following settings... 
00:01:11.446 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732786430_525da78e38f4bba7e9e0 00:01:11.446 ==> default: -- Domain type: kvm 00:01:11.446 ==> default: -- Cpus: 10 00:01:11.446 ==> default: -- Feature: acpi 00:01:11.446 ==> default: -- Feature: apic 00:01:11.446 ==> default: -- Feature: pae 00:01:11.447 ==> default: -- Memory: 12288M 00:01:11.447 ==> default: -- Memory Backing: hugepages: 00:01:11.447 ==> default: -- Management MAC: 00:01:11.447 ==> default: -- Loader: 00:01:11.447 ==> default: -- Nvram: 00:01:11.447 ==> default: -- Base box: spdk/fedora39 00:01:11.447 ==> default: -- Storage pool: default 00:01:11.447 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732786430_525da78e38f4bba7e9e0.img (20G) 00:01:11.447 ==> default: -- Volume Cache: default 00:01:11.447 ==> default: -- Kernel: 00:01:11.447 ==> default: -- Initrd: 00:01:11.447 ==> default: -- Graphics Type: vnc 00:01:11.447 ==> default: -- Graphics Port: -1 00:01:11.447 ==> default: -- Graphics IP: 127.0.0.1 00:01:11.447 ==> default: -- Graphics Password: Not defined 00:01:11.447 ==> default: -- Video Type: cirrus 00:01:11.447 ==> default: -- Video VRAM: 9216 00:01:11.447 ==> default: -- Sound Type: 00:01:11.447 ==> default: -- Keymap: en-us 00:01:11.447 ==> default: -- TPM Path: 00:01:11.447 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:11.447 ==> default: -- Command line args: 00:01:11.447 ==> default: -> value=-device, 00:01:11.447 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:11.447 ==> default: -> value=-drive, 00:01:11.447 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:11.447 ==> default: -> value=-device, 00:01:11.447 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:11.447 ==> default: -> value=-device, 00:01:11.447 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:11.447 ==> default: -> value=-drive, 00:01:11.447 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0, 00:01:11.447 ==> default: -> value=-device, 00:01:11.447 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:11.447 ==> default: -> value=-device, 00:01:11.447 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:11.447 ==> default: -> value=-drive, 00:01:11.447 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:11.447 ==> default: -> value=-device, 00:01:11.447 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:11.447 ==> default: -> value=-drive, 00:01:11.447 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:11.447 ==> default: -> value=-device, 00:01:11.447 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:11.447 ==> default: -> value=-drive, 00:01:11.447 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:11.447 ==> default: -> value=-device, 00:01:11.447 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:11.447 ==> default: -> value=-device, 00:01:11.447 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:11.447 ==> default: -> value=-device, 00:01:11.447 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:11.447 ==> default: -> value=-drive, 00:01:11.447 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:11.447 ==> default: -> value=-device, 00:01:11.447 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:11.707 ==> default: Creating shared folders metadata... 00:01:11.707 ==> default: Starting domain. 00:01:13.623 ==> default: Waiting for domain to get an IP address... 00:01:31.781 ==> default: Waiting for SSH to become available... 00:01:31.781 ==> default: Configuring and enabling network interfaces... 00:01:33.696 default: SSH address: 192.168.121.179:22 00:01:33.696 default: SSH username: vagrant 00:01:33.696 default: SSH auth method: private key 00:01:36.262 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:44.405 ==> default: Mounting SSHFS shared folder... 00:01:45.791 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:45.791 ==> default: Checking Mount.. 00:01:46.733 ==> default: Folder Successfully Mounted! 00:01:46.733 00:01:46.733 SUCCESS! 00:01:46.733 00:01:46.733 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:46.733 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:46.733 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:46.733 00:01:46.742 [Pipeline] } 00:01:46.755 [Pipeline] // stage 00:01:46.762 [Pipeline] dir 00:01:46.763 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:46.764 [Pipeline] { 00:01:46.774 [Pipeline] catchError 00:01:46.776 [Pipeline] { 00:01:46.787 [Pipeline] sh 00:01:47.072 + vagrant ssh-config --host vagrant 00:01:47.072 + sed -ne '/^Host/,$p' 00:01:47.072 + tee ssh_conf 00:01:49.620 Host vagrant 00:01:49.620 HostName 192.168.121.179 00:01:49.620 User vagrant 00:01:49.620 Port 22 00:01:49.620 UserKnownHostsFile /dev/null 00:01:49.620 StrictHostKeyChecking no 00:01:49.620 PasswordAuthentication no 00:01:49.620 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:49.620 IdentitiesOnly yes 00:01:49.620 LogLevel FATAL 00:01:49.620 ForwardAgent yes 00:01:49.620 ForwardX11 yes 00:01:49.620 00:01:49.637 [Pipeline] withEnv 00:01:49.641 [Pipeline] { 00:01:49.660 [Pipeline] sh 00:01:49.952 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:49.952 source /etc/os-release 00:01:49.952 [[ -e /image.version ]] && img=$(< /image.version) 00:01:49.952 # Minimal, systemd-like check. 
00:01:49.952 if [[ -e /.dockerenv ]]; then 00:01:49.952 # Clear garbage from the node'\''s name: 00:01:49.952 # agt-er_autotest_547-896 -> autotest_547-896 00:01:49.952 # $HOSTNAME is the actual container id 00:01:49.952 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:49.952 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:49.952 # We can assume this is a mount from a host where container is running, 00:01:49.952 # so fetch its hostname to easily identify the target swarm worker. 00:01:49.952 container="$(< /etc/hostname) ($agent)" 00:01:49.952 else 00:01:49.952 # Fallback 00:01:49.952 container=$agent 00:01:49.952 fi 00:01:49.952 fi 00:01:49.952 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:49.952 ' 00:01:50.228 [Pipeline] } 00:01:50.246 [Pipeline] // withEnv 00:01:50.256 [Pipeline] setCustomBuildProperty 00:01:50.274 [Pipeline] stage 00:01:50.276 [Pipeline] { (Tests) 00:01:50.296 [Pipeline] sh 00:01:50.624 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:50.901 [Pipeline] sh 00:01:51.190 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:51.467 [Pipeline] timeout 00:01:51.467 Timeout set to expire in 50 min 00:01:51.469 [Pipeline] { 00:01:51.483 [Pipeline] sh 00:01:51.768 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:52.337 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:52.348 [Pipeline] sh 00:01:52.625 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:52.901 [Pipeline] sh 00:01:53.185 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:53.463 [Pipeline] sh 00:01:53.748 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:54.010 ++ readlink -f spdk_repo 00:01:54.010 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:54.010 + [[ -n /home/vagrant/spdk_repo ]] 00:01:54.010 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:54.010 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:54.010 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:54.010 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:54.010 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:54.010 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:54.010 + cd /home/vagrant/spdk_repo 00:01:54.010 + source /etc/os-release 00:01:54.010 ++ NAME='Fedora Linux' 00:01:54.010 ++ VERSION='39 (Cloud Edition)' 00:01:54.010 ++ ID=fedora 00:01:54.010 ++ VERSION_ID=39 00:01:54.010 ++ VERSION_CODENAME= 00:01:54.010 ++ PLATFORM_ID=platform:f39 00:01:54.010 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:54.010 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:54.010 ++ LOGO=fedora-logo-icon 00:01:54.010 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:54.010 ++ HOME_URL=https://fedoraproject.org/ 00:01:54.010 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:54.010 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:54.010 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:54.010 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:54.010 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:54.010 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:54.010 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:54.010 ++ SUPPORT_END=2024-11-12 00:01:54.010 ++ VARIANT='Cloud Edition' 00:01:54.010 ++ VARIANT_ID=cloud 00:01:54.010 + uname -a 00:01:54.010 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:54.010 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:54.271 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:54.532 Hugepages 00:01:54.533 node hugesize free / total 00:01:54.533 node0 1048576kB 0 / 0 00:01:54.533 node0 2048kB 0 / 0 00:01:54.533 00:01:54.533 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:54.533 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:54.533 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:54.795 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:54.795 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:54.795 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:54.795 + rm -f /tmp/spdk-ld-path 00:01:54.795 + source autorun-spdk.conf 00:01:54.795 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:54.795 ++ SPDK_TEST_NVME=1 00:01:54.795 ++ SPDK_TEST_FTL=1 00:01:54.795 ++ SPDK_TEST_ISAL=1 00:01:54.795 ++ SPDK_RUN_ASAN=1 00:01:54.795 ++ SPDK_RUN_UBSAN=1 00:01:54.795 ++ SPDK_TEST_XNVME=1 00:01:54.795 ++ SPDK_TEST_NVME_FDP=1 00:01:54.795 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:54.795 ++ RUN_NIGHTLY=1 00:01:54.795 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:54.795 + [[ -n '' ]] 00:01:54.795 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:54.795 + for M in /var/spdk/build-*-manifest.txt 00:01:54.795 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:54.795 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:54.795 + for M in /var/spdk/build-*-manifest.txt 00:01:54.795 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:54.795 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:54.795 + for M in /var/spdk/build-*-manifest.txt 00:01:54.795 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:54.795 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:54.795 ++ uname 00:01:54.795 + [[ Linux == \L\i\n\u\x ]] 00:01:54.795 + sudo dmesg -T 00:01:54.795 + sudo dmesg --clear 00:01:54.795 + dmesg_pid=5035 00:01:54.795 
+ [[ Fedora Linux == FreeBSD ]] 00:01:54.795 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:54.795 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:54.795 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:54.795 + [[ -x /usr/src/fio-static/fio ]] 00:01:54.795 + sudo dmesg -Tw 00:01:54.795 + export FIO_BIN=/usr/src/fio-static/fio 00:01:54.795 + FIO_BIN=/usr/src/fio-static/fio 00:01:54.795 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:54.795 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:54.795 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:54.795 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:54.795 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:54.795 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:54.795 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:54.795 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:54.795 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:54.795 09:34:33 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:54.795 09:34:33 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:54.795 09:34:33 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:54.795 09:34:33 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:01:54.795 09:34:33 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:01:54.795 09:34:33 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:01:54.795 09:34:33 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:01:54.795 09:34:33 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:54.795 09:34:33 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:01:54.795 09:34:33 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:01:54.795 09:34:33 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:54.795 09:34:33 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:01:54.795 09:34:33 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:54.795 09:34:33 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:55.057 09:34:33 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:55.057 09:34:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:55.057 09:34:33 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:55.057 09:34:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:55.057 09:34:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:55.057 09:34:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:55.057 09:34:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.057 09:34:33 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.057 09:34:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.057 09:34:33 -- paths/export.sh@5 -- $ export PATH 00:01:55.057 09:34:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.057 09:34:33 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:55.057 09:34:33 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:55.057 09:34:33 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732786473.XXXXXX 00:01:55.057 09:34:33 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732786473.eeNhnk 00:01:55.057 09:34:33 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:55.057 09:34:33 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:55.057 09:34:33 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:55.057 09:34:33 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:55.057 09:34:33 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:55.057 09:34:33 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:55.057 09:34:33 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:55.057 09:34:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.057 09:34:33 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:55.057 09:34:33 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:55.057 09:34:33 -- pm/common@17 -- $ local monitor 00:01:55.057 09:34:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.057 09:34:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.057 09:34:33 -- pm/common@25 -- $ sleep 1 00:01:55.057 09:34:33 -- pm/common@21 -- $ date +%s 00:01:55.057 09:34:33 -- pm/common@21 -- $ date +%s 00:01:55.057 09:34:33 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732786473 00:01:55.057 09:34:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732786473 00:01:55.057 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732786473_collect-cpu-load.pm.log 00:01:55.057 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732786473_collect-vmstat.pm.log 00:01:56.002 09:34:34 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:56.002 09:34:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:56.002 09:34:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:56.002 09:34:34 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:56.002 09:34:34 -- spdk/autobuild.sh@16 -- $ date -u 00:01:56.002 Thu Nov 28 09:34:34 AM UTC 2024 00:01:56.002 09:34:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:56.002 v25.01-pre-276-g35cd3e84d 00:01:56.002 09:34:34 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:56.002 09:34:34 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:56.002 09:34:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:56.002 09:34:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:56.002 09:34:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.002 ************************************ 00:01:56.002 START TEST asan 00:01:56.002 ************************************ 00:01:56.002 using asan 00:01:56.002 09:34:34 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:56.002 00:01:56.002 real 0m0.000s 00:01:56.002 user 0m0.000s 00:01:56.002 sys 0m0.000s 00:01:56.002 09:34:34 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:56.002 ************************************ 00:01:56.002 END TEST asan 00:01:56.002 ************************************ 00:01:56.002 09:34:34 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:56.002 09:34:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:56.002 09:34:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:56.002 09:34:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:56.002 09:34:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:56.002 09:34:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.002 ************************************ 00:01:56.002 START TEST ubsan 00:01:56.002 ************************************ 00:01:56.002 using ubsan 00:01:56.002 ************************************ 00:01:56.002 END TEST ubsan 00:01:56.002 ************************************ 00:01:56.002 09:34:34 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:56.002 00:01:56.002 real 0m0.000s 00:01:56.002 user 0m0.000s 00:01:56.002 sys 0m0.000s 00:01:56.002 09:34:34 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:56.002 09:34:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:56.264 09:34:34 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:56.264 09:34:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:56.264 09:34:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:56.264 09:34:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:56.264 09:34:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:56.264 09:34:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:56.264 09:34:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:01:56.264 09:34:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:56.264 09:34:34 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:56.264 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:56.264 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:56.836 Using 'verbs' RDMA provider 00:02:10.045 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:20.051 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:20.051 Creating mk/config.mk...done. 00:02:20.051 Creating mk/cc.flags.mk...done. 00:02:20.051 Type 'make' to build. 00:02:20.051 09:34:57 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:20.051 09:34:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:20.051 09:34:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:20.051 09:34:57 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.051 ************************************ 00:02:20.051 START TEST make 00:02:20.051 ************************************ 00:02:20.051 09:34:57 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:20.051 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:20.051 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:20.051 meson setup builddir \ 00:02:20.051 -Dwith-libaio=enabled \ 00:02:20.051 -Dwith-liburing=enabled \ 00:02:20.051 -Dwith-libvfn=disabled \ 00:02:20.051 -Dwith-spdk=disabled \ 00:02:20.051 -Dexamples=false \ 00:02:20.051 -Dtests=false \ 00:02:20.051 -Dtools=false && \ 00:02:20.051 meson compile -C builddir && \ 00:02:20.051 cd -) 00:02:20.051 make[1]: Nothing to be done for 'all'. 
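For reference, the xnvme sub-build that the make step launches above can be reproduced by hand; this is a minimal sketch that simply restates the meson invocation printed in the make output, assuming the job's standard checkout at /home/vagrant/spdk_repo/spdk and that meson and ninja are already installed.

    # Sketch only: re-run the xnvme meson sub-build that 'make -j10' kicks off above.
    # Paths assume the job's /home/vagrant/spdk_repo/spdk checkout; adjust for other layouts.
    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    # Compiles the library targets; the link lines later in the log show the result
    # landing under builddir (lib/libxnvme.a and lib/libxnvme.so.0.7.5).
    meson compile -C builddir

Re-running meson compile -C builddir after the first pass is a no-op unless sources change, which matches the "Nothing to be done for 'all'" message above.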
00:02:21.430 The Meson build system 00:02:21.430 Version: 1.5.0 00:02:21.430 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:21.430 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:21.430 Build type: native build 00:02:21.430 Project name: xnvme 00:02:21.430 Project version: 0.7.5 00:02:21.430 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:21.430 C linker for the host machine: cc ld.bfd 2.40-14 00:02:21.430 Host machine cpu family: x86_64 00:02:21.430 Host machine cpu: x86_64 00:02:21.430 Message: host_machine.system: linux 00:02:21.430 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:21.430 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:21.430 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:21.430 Run-time dependency threads found: YES 00:02:21.430 Has header "setupapi.h" : NO 00:02:21.430 Has header "linux/blkzoned.h" : YES 00:02:21.430 Has header "linux/blkzoned.h" : YES (cached) 00:02:21.430 Has header "libaio.h" : YES 00:02:21.430 Library aio found: YES 00:02:21.430 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:21.430 Run-time dependency liburing found: YES 2.2 00:02:21.430 Dependency libvfn skipped: feature with-libvfn disabled 00:02:21.430 Found CMake: /usr/bin/cmake (3.27.7) 00:02:21.430 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:21.430 Subproject spdk : skipped: feature with-spdk disabled 00:02:21.430 Run-time dependency appleframeworks found: NO (tried framework) 00:02:21.430 Run-time dependency appleframeworks found: NO (tried framework) 00:02:21.430 Library rt found: YES 00:02:21.430 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:21.430 Configuring xnvme_config.h using configuration 00:02:21.430 Configuring xnvme.spec using configuration 00:02:21.430 Run-time dependency bash-completion found: YES 2.11 00:02:21.430 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:21.430 Program cp found: YES (/usr/bin/cp) 00:02:21.430 Build targets in project: 3 00:02:21.430 00:02:21.430 xnvme 0.7.5 00:02:21.430 00:02:21.430 Subprojects 00:02:21.430 spdk : NO Feature 'with-spdk' disabled 00:02:21.430 00:02:21.430 User defined options 00:02:21.430 examples : false 00:02:21.430 tests : false 00:02:21.430 tools : false 00:02:21.430 with-libaio : enabled 00:02:21.430 with-liburing: enabled 00:02:21.430 with-libvfn : disabled 00:02:21.430 with-spdk : disabled 00:02:21.430 00:02:21.430 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:21.995 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:21.995 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:21.995 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:21.995 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:21.995 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:21.995 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:21.995 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:21.995 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:21.995 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:21.995 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:21.995 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:21.995 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:21.995 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:21.995 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:21.995 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:22.253 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:22.253 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:22.253 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:22.253 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:22.253 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:22.253 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:22.253 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:22.253 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:22.253 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:22.253 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:22.253 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:22.253 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:22.253 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:22.253 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:22.253 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:22.253 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:22.253 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:22.253 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:22.253 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:22.253 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:22.253 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:22.253 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:22.253 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:22.253 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:22.253 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:22.253 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:22.253 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:22.253 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:22.253 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:22.253 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:22.253 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:22.253 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:22.253 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:22.253 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:22.253 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:22.253 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:22.253 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:22.253 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:22.253 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:22.510 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:22.510 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:22.510 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:22.510 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:22.510 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:22.510 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:22.510 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:22.510 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:22.510 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:22.510 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:22.510 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:22.510 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:22.510 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:22.510 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:22.510 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:22.510 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:22.510 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:22.510 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:22.799 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:22.799 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:22.799 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:22.799 [75/76] Linking static target lib/libxnvme.a 00:02:22.799 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:22.799 INFO: autodetecting backend as ninja 00:02:22.799 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:23.070 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:29.633 The Meson build system 00:02:29.633 Version: 1.5.0 00:02:29.633 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:29.633 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:29.633 Build type: native build 00:02:29.633 Program cat found: YES (/usr/bin/cat) 00:02:29.633 Project name: DPDK 00:02:29.633 Project version: 24.03.0 00:02:29.633 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:29.633 C linker for the host machine: cc ld.bfd 2.40-14 00:02:29.633 Host machine cpu family: x86_64 00:02:29.633 Host machine cpu: x86_64 00:02:29.633 Message: ## Building in Developer Mode ## 00:02:29.633 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:29.633 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:29.633 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:29.633 Program python3 found: YES (/usr/bin/python3) 00:02:29.633 Program cat found: YES (/usr/bin/cat) 00:02:29.633 Compiler for C supports arguments -march=native: YES 00:02:29.633 Checking for size of "void *" : 8 00:02:29.633 Checking for size of "void *" : 8 (cached) 00:02:29.633 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:29.633 Library m found: YES 00:02:29.633 Library numa found: YES 00:02:29.633 Has header "numaif.h" : YES 00:02:29.633 Library fdt found: NO 00:02:29.633 Library execinfo found: NO 00:02:29.633 Has header "execinfo.h" : YES 00:02:29.633 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:29.633 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:29.634 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:29.634 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:29.634 Run-time dependency openssl found: YES 3.1.1 00:02:29.634 Run-time dependency libpcap found: YES 1.10.4 00:02:29.634 Has header "pcap.h" with dependency libpcap: YES 00:02:29.634 Compiler for C supports arguments -Wcast-qual: YES 00:02:29.634 Compiler for C supports arguments -Wdeprecated: YES 00:02:29.634 Compiler for C supports arguments -Wformat: YES 00:02:29.634 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:29.634 Compiler for C supports arguments -Wformat-security: NO 00:02:29.634 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:29.634 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:29.634 Compiler for C supports arguments -Wnested-externs: YES 00:02:29.634 Compiler for C supports arguments -Wold-style-definition: YES 00:02:29.634 Compiler for C supports arguments -Wpointer-arith: YES 00:02:29.634 Compiler for C supports arguments -Wsign-compare: YES 00:02:29.634 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:29.634 Compiler for C supports arguments -Wundef: YES 00:02:29.634 Compiler for C supports arguments -Wwrite-strings: YES 00:02:29.634 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:29.634 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:29.634 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:29.634 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:29.634 Program objdump found: YES (/usr/bin/objdump) 00:02:29.634 Compiler for C supports arguments -mavx512f: YES 00:02:29.634 Checking if "AVX512 checking" compiles: YES 00:02:29.634 Fetching value of define "__SSE4_2__" : 1 00:02:29.634 Fetching value of define "__AES__" : 1 00:02:29.634 Fetching value of define "__AVX__" : 1 00:02:29.634 Fetching value of define "__AVX2__" : 1 00:02:29.634 Fetching value of define "__AVX512BW__" : 1 00:02:29.634 Fetching value of define "__AVX512CD__" : 1 00:02:29.634 Fetching value of define "__AVX512DQ__" : 1 00:02:29.634 Fetching value of define "__AVX512F__" : 1 00:02:29.634 Fetching value of define "__AVX512VL__" : 1 00:02:29.634 Fetching value of define "__PCLMUL__" : 1 00:02:29.634 Fetching value of define "__RDRND__" : 1 00:02:29.634 Fetching value of define "__RDSEED__" : 1 00:02:29.634 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:29.634 Fetching value of define "__znver1__" : (undefined) 00:02:29.634 Fetching value of define "__znver2__" : (undefined) 00:02:29.634 Fetching value of define "__znver3__" : (undefined) 00:02:29.634 Fetching value of define "__znver4__" : (undefined) 00:02:29.634 Library asan found: YES 00:02:29.634 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:29.634 Message: lib/log: Defining dependency "log" 00:02:29.634 Message: lib/kvargs: Defining dependency "kvargs" 00:02:29.634 Message: lib/telemetry: Defining dependency "telemetry" 00:02:29.634 Library rt found: YES 00:02:29.634 Checking for function "getentropy" : NO 00:02:29.634 Message: 
lib/eal: Defining dependency "eal" 00:02:29.634 Message: lib/ring: Defining dependency "ring" 00:02:29.634 Message: lib/rcu: Defining dependency "rcu" 00:02:29.634 Message: lib/mempool: Defining dependency "mempool" 00:02:29.634 Message: lib/mbuf: Defining dependency "mbuf" 00:02:29.634 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:29.634 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:29.634 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:29.634 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:29.634 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:29.634 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:29.634 Compiler for C supports arguments -mpclmul: YES 00:02:29.634 Compiler for C supports arguments -maes: YES 00:02:29.634 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:29.634 Compiler for C supports arguments -mavx512bw: YES 00:02:29.634 Compiler for C supports arguments -mavx512dq: YES 00:02:29.634 Compiler for C supports arguments -mavx512vl: YES 00:02:29.634 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:29.634 Compiler for C supports arguments -mavx2: YES 00:02:29.634 Compiler for C supports arguments -mavx: YES 00:02:29.634 Message: lib/net: Defining dependency "net" 00:02:29.634 Message: lib/meter: Defining dependency "meter" 00:02:29.634 Message: lib/ethdev: Defining dependency "ethdev" 00:02:29.634 Message: lib/pci: Defining dependency "pci" 00:02:29.634 Message: lib/cmdline: Defining dependency "cmdline" 00:02:29.634 Message: lib/hash: Defining dependency "hash" 00:02:29.634 Message: lib/timer: Defining dependency "timer" 00:02:29.634 Message: lib/compressdev: Defining dependency "compressdev" 00:02:29.634 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:29.634 Message: lib/dmadev: Defining dependency "dmadev" 00:02:29.634 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:29.634 Message: lib/power: Defining dependency "power" 00:02:29.634 Message: lib/reorder: Defining dependency "reorder" 00:02:29.634 Message: lib/security: Defining dependency "security" 00:02:29.634 Has header "linux/userfaultfd.h" : YES 00:02:29.634 Has header "linux/vduse.h" : YES 00:02:29.634 Message: lib/vhost: Defining dependency "vhost" 00:02:29.634 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:29.634 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:29.634 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:29.634 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:29.634 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:29.634 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:29.634 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:29.634 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:29.634 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:29.634 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:29.634 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:29.634 Configuring doxy-api-html.conf using configuration 00:02:29.634 Configuring doxy-api-man.conf using configuration 00:02:29.634 Program mandb found: YES (/usr/bin/mandb) 00:02:29.634 Program sphinx-build found: NO 00:02:29.634 Configuring rte_build_config.h using configuration 00:02:29.634 Message: 00:02:29.634 ================= 00:02:29.634 Applications Enabled 00:02:29.634 
================= 00:02:29.634 00:02:29.634 apps: 00:02:29.634 00:02:29.634 00:02:29.634 Message: 00:02:29.634 ================= 00:02:29.634 Libraries Enabled 00:02:29.634 ================= 00:02:29.634 00:02:29.634 libs: 00:02:29.634 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:29.634 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:29.634 cryptodev, dmadev, power, reorder, security, vhost, 00:02:29.634 00:02:29.634 Message: 00:02:29.634 =============== 00:02:29.634 Drivers Enabled 00:02:29.634 =============== 00:02:29.634 00:02:29.634 common: 00:02:29.634 00:02:29.634 bus: 00:02:29.634 pci, vdev, 00:02:29.634 mempool: 00:02:29.634 ring, 00:02:29.634 dma: 00:02:29.634 00:02:29.634 net: 00:02:29.634 00:02:29.634 crypto: 00:02:29.634 00:02:29.634 compress: 00:02:29.634 00:02:29.634 vdpa: 00:02:29.634 00:02:29.634 00:02:29.634 Message: 00:02:29.634 ================= 00:02:29.634 Content Skipped 00:02:29.635 ================= 00:02:29.635 00:02:29.635 apps: 00:02:29.635 dumpcap: explicitly disabled via build config 00:02:29.635 graph: explicitly disabled via build config 00:02:29.635 pdump: explicitly disabled via build config 00:02:29.635 proc-info: explicitly disabled via build config 00:02:29.635 test-acl: explicitly disabled via build config 00:02:29.635 test-bbdev: explicitly disabled via build config 00:02:29.635 test-cmdline: explicitly disabled via build config 00:02:29.635 test-compress-perf: explicitly disabled via build config 00:02:29.635 test-crypto-perf: explicitly disabled via build config 00:02:29.635 test-dma-perf: explicitly disabled via build config 00:02:29.635 test-eventdev: explicitly disabled via build config 00:02:29.635 test-fib: explicitly disabled via build config 00:02:29.635 test-flow-perf: explicitly disabled via build config 00:02:29.635 test-gpudev: explicitly disabled via build config 00:02:29.635 test-mldev: explicitly disabled via build config 00:02:29.635 test-pipeline: explicitly disabled via build config 00:02:29.635 test-pmd: explicitly disabled via build config 00:02:29.635 test-regex: explicitly disabled via build config 00:02:29.635 test-sad: explicitly disabled via build config 00:02:29.635 test-security-perf: explicitly disabled via build config 00:02:29.635 00:02:29.635 libs: 00:02:29.635 argparse: explicitly disabled via build config 00:02:29.635 metrics: explicitly disabled via build config 00:02:29.635 acl: explicitly disabled via build config 00:02:29.635 bbdev: explicitly disabled via build config 00:02:29.635 bitratestats: explicitly disabled via build config 00:02:29.635 bpf: explicitly disabled via build config 00:02:29.635 cfgfile: explicitly disabled via build config 00:02:29.635 distributor: explicitly disabled via build config 00:02:29.635 efd: explicitly disabled via build config 00:02:29.635 eventdev: explicitly disabled via build config 00:02:29.635 dispatcher: explicitly disabled via build config 00:02:29.635 gpudev: explicitly disabled via build config 00:02:29.635 gro: explicitly disabled via build config 00:02:29.635 gso: explicitly disabled via build config 00:02:29.635 ip_frag: explicitly disabled via build config 00:02:29.635 jobstats: explicitly disabled via build config 00:02:29.635 latencystats: explicitly disabled via build config 00:02:29.635 lpm: explicitly disabled via build config 00:02:29.635 member: explicitly disabled via build config 00:02:29.635 pcapng: explicitly disabled via build config 00:02:29.635 rawdev: explicitly disabled via build config 00:02:29.635 regexdev: explicitly 
disabled via build config 00:02:29.635 mldev: explicitly disabled via build config 00:02:29.635 rib: explicitly disabled via build config 00:02:29.635 sched: explicitly disabled via build config 00:02:29.635 stack: explicitly disabled via build config 00:02:29.635 ipsec: explicitly disabled via build config 00:02:29.635 pdcp: explicitly disabled via build config 00:02:29.635 fib: explicitly disabled via build config 00:02:29.635 port: explicitly disabled via build config 00:02:29.635 pdump: explicitly disabled via build config 00:02:29.635 table: explicitly disabled via build config 00:02:29.635 pipeline: explicitly disabled via build config 00:02:29.635 graph: explicitly disabled via build config 00:02:29.635 node: explicitly disabled via build config 00:02:29.635 00:02:29.635 drivers: 00:02:29.635 common/cpt: not in enabled drivers build config 00:02:29.635 common/dpaax: not in enabled drivers build config 00:02:29.635 common/iavf: not in enabled drivers build config 00:02:29.635 common/idpf: not in enabled drivers build config 00:02:29.635 common/ionic: not in enabled drivers build config 00:02:29.635 common/mvep: not in enabled drivers build config 00:02:29.635 common/octeontx: not in enabled drivers build config 00:02:29.635 bus/auxiliary: not in enabled drivers build config 00:02:29.635 bus/cdx: not in enabled drivers build config 00:02:29.635 bus/dpaa: not in enabled drivers build config 00:02:29.635 bus/fslmc: not in enabled drivers build config 00:02:29.635 bus/ifpga: not in enabled drivers build config 00:02:29.635 bus/platform: not in enabled drivers build config 00:02:29.635 bus/uacce: not in enabled drivers build config 00:02:29.635 bus/vmbus: not in enabled drivers build config 00:02:29.635 common/cnxk: not in enabled drivers build config 00:02:29.635 common/mlx5: not in enabled drivers build config 00:02:29.635 common/nfp: not in enabled drivers build config 00:02:29.635 common/nitrox: not in enabled drivers build config 00:02:29.635 common/qat: not in enabled drivers build config 00:02:29.635 common/sfc_efx: not in enabled drivers build config 00:02:29.635 mempool/bucket: not in enabled drivers build config 00:02:29.635 mempool/cnxk: not in enabled drivers build config 00:02:29.635 mempool/dpaa: not in enabled drivers build config 00:02:29.635 mempool/dpaa2: not in enabled drivers build config 00:02:29.635 mempool/octeontx: not in enabled drivers build config 00:02:29.635 mempool/stack: not in enabled drivers build config 00:02:29.635 dma/cnxk: not in enabled drivers build config 00:02:29.635 dma/dpaa: not in enabled drivers build config 00:02:29.635 dma/dpaa2: not in enabled drivers build config 00:02:29.635 dma/hisilicon: not in enabled drivers build config 00:02:29.635 dma/idxd: not in enabled drivers build config 00:02:29.635 dma/ioat: not in enabled drivers build config 00:02:29.635 dma/skeleton: not in enabled drivers build config 00:02:29.635 net/af_packet: not in enabled drivers build config 00:02:29.635 net/af_xdp: not in enabled drivers build config 00:02:29.635 net/ark: not in enabled drivers build config 00:02:29.635 net/atlantic: not in enabled drivers build config 00:02:29.635 net/avp: not in enabled drivers build config 00:02:29.635 net/axgbe: not in enabled drivers build config 00:02:29.635 net/bnx2x: not in enabled drivers build config 00:02:29.635 net/bnxt: not in enabled drivers build config 00:02:29.635 net/bonding: not in enabled drivers build config 00:02:29.635 net/cnxk: not in enabled drivers build config 00:02:29.635 net/cpfl: not in enabled drivers 
build config 00:02:29.635 net/cxgbe: not in enabled drivers build config 00:02:29.635 net/dpaa: not in enabled drivers build config 00:02:29.635 net/dpaa2: not in enabled drivers build config 00:02:29.635 net/e1000: not in enabled drivers build config 00:02:29.635 net/ena: not in enabled drivers build config 00:02:29.635 net/enetc: not in enabled drivers build config 00:02:29.635 net/enetfec: not in enabled drivers build config 00:02:29.635 net/enic: not in enabled drivers build config 00:02:29.635 net/failsafe: not in enabled drivers build config 00:02:29.635 net/fm10k: not in enabled drivers build config 00:02:29.635 net/gve: not in enabled drivers build config 00:02:29.635 net/hinic: not in enabled drivers build config 00:02:29.635 net/hns3: not in enabled drivers build config 00:02:29.635 net/i40e: not in enabled drivers build config 00:02:29.635 net/iavf: not in enabled drivers build config 00:02:29.635 net/ice: not in enabled drivers build config 00:02:29.635 net/idpf: not in enabled drivers build config 00:02:29.635 net/igc: not in enabled drivers build config 00:02:29.635 net/ionic: not in enabled drivers build config 00:02:29.635 net/ipn3ke: not in enabled drivers build config 00:02:29.635 net/ixgbe: not in enabled drivers build config 00:02:29.635 net/mana: not in enabled drivers build config 00:02:29.635 net/memif: not in enabled drivers build config 00:02:29.635 net/mlx4: not in enabled drivers build config 00:02:29.635 net/mlx5: not in enabled drivers build config 00:02:29.635 net/mvneta: not in enabled drivers build config 00:02:29.635 net/mvpp2: not in enabled drivers build config 00:02:29.635 net/netvsc: not in enabled drivers build config 00:02:29.635 net/nfb: not in enabled drivers build config 00:02:29.635 net/nfp: not in enabled drivers build config 00:02:29.635 net/ngbe: not in enabled drivers build config 00:02:29.636 net/null: not in enabled drivers build config 00:02:29.636 net/octeontx: not in enabled drivers build config 00:02:29.636 net/octeon_ep: not in enabled drivers build config 00:02:29.636 net/pcap: not in enabled drivers build config 00:02:29.636 net/pfe: not in enabled drivers build config 00:02:29.636 net/qede: not in enabled drivers build config 00:02:29.636 net/ring: not in enabled drivers build config 00:02:29.636 net/sfc: not in enabled drivers build config 00:02:29.636 net/softnic: not in enabled drivers build config 00:02:29.636 net/tap: not in enabled drivers build config 00:02:29.636 net/thunderx: not in enabled drivers build config 00:02:29.636 net/txgbe: not in enabled drivers build config 00:02:29.636 net/vdev_netvsc: not in enabled drivers build config 00:02:29.636 net/vhost: not in enabled drivers build config 00:02:29.636 net/virtio: not in enabled drivers build config 00:02:29.636 net/vmxnet3: not in enabled drivers build config 00:02:29.636 raw/*: missing internal dependency, "rawdev" 00:02:29.636 crypto/armv8: not in enabled drivers build config 00:02:29.636 crypto/bcmfs: not in enabled drivers build config 00:02:29.636 crypto/caam_jr: not in enabled drivers build config 00:02:29.636 crypto/ccp: not in enabled drivers build config 00:02:29.636 crypto/cnxk: not in enabled drivers build config 00:02:29.636 crypto/dpaa_sec: not in enabled drivers build config 00:02:29.636 crypto/dpaa2_sec: not in enabled drivers build config 00:02:29.636 crypto/ipsec_mb: not in enabled drivers build config 00:02:29.636 crypto/mlx5: not in enabled drivers build config 00:02:29.636 crypto/mvsam: not in enabled drivers build config 00:02:29.636 crypto/nitrox: 
not in enabled drivers build config 00:02:29.636 crypto/null: not in enabled drivers build config 00:02:29.636 crypto/octeontx: not in enabled drivers build config 00:02:29.636 crypto/openssl: not in enabled drivers build config 00:02:29.636 crypto/scheduler: not in enabled drivers build config 00:02:29.636 crypto/uadk: not in enabled drivers build config 00:02:29.636 crypto/virtio: not in enabled drivers build config 00:02:29.636 compress/isal: not in enabled drivers build config 00:02:29.636 compress/mlx5: not in enabled drivers build config 00:02:29.636 compress/nitrox: not in enabled drivers build config 00:02:29.636 compress/octeontx: not in enabled drivers build config 00:02:29.636 compress/zlib: not in enabled drivers build config 00:02:29.636 regex/*: missing internal dependency, "regexdev" 00:02:29.636 ml/*: missing internal dependency, "mldev" 00:02:29.636 vdpa/ifc: not in enabled drivers build config 00:02:29.636 vdpa/mlx5: not in enabled drivers build config 00:02:29.636 vdpa/nfp: not in enabled drivers build config 00:02:29.636 vdpa/sfc: not in enabled drivers build config 00:02:29.636 event/*: missing internal dependency, "eventdev" 00:02:29.636 baseband/*: missing internal dependency, "bbdev" 00:02:29.636 gpu/*: missing internal dependency, "gpudev" 00:02:29.636 00:02:29.636 00:02:29.636 Build targets in project: 84 00:02:29.636 00:02:29.636 DPDK 24.03.0 00:02:29.636 00:02:29.636 User defined options 00:02:29.636 buildtype : debug 00:02:29.636 default_library : shared 00:02:29.636 libdir : lib 00:02:29.636 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:29.636 b_sanitize : address 00:02:29.636 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:29.636 c_link_args : 00:02:29.636 cpu_instruction_set: native 00:02:29.636 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:29.636 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:29.636 enable_docs : false 00:02:29.636 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:29.636 enable_kmods : false 00:02:29.636 max_lcores : 128 00:02:29.636 tests : false 00:02:29.636 00:02:29.636 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:29.636 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:29.636 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:29.636 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:29.636 [3/267] Linking static target lib/librte_kvargs.a 00:02:29.636 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:29.636 [5/267] Linking static target lib/librte_log.a 00:02:29.894 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:29.894 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:29.894 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:29.894 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:30.152 [10/267] 
Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:30.152 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:30.152 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:30.152 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:30.152 [14/267] Linking static target lib/librte_telemetry.a 00:02:30.152 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:30.152 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:30.152 [17/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.152 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:30.410 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:30.410 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:30.410 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:30.410 [22/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.410 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:30.410 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:30.410 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:30.410 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:30.410 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:30.668 [28/267] Linking target lib/librte_log.so.24.1 00:02:30.668 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:30.668 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:30.668 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:30.668 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:30.668 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:30.668 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:30.668 [35/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.927 [36/267] Linking target lib/librte_telemetry.so.24.1 00:02:30.927 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:30.927 [38/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:30.927 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:30.927 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:30.927 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:30.927 [42/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:30.927 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:30.927 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:30.927 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:31.185 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:31.185 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:31.185 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 
00:02:31.185 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:31.185 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:31.185 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:31.185 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:31.444 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:31.444 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:31.444 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:31.444 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:31.444 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:31.701 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:31.701 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:31.701 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:31.701 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:31.701 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:31.701 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:31.701 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:31.701 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:31.959 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:31.959 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:31.959 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:31.959 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:31.959 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:32.216 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:32.216 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:32.216 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:32.216 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:32.216 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:32.216 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:32.216 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:32.216 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:32.216 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:32.477 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:32.477 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:32.477 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:32.477 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:32.477 [84/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:32.477 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:32.477 [86/267] Linking static target lib/librte_ring.a 00:02:32.747 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:32.747 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:32.747 [89/267] Linking static target lib/librte_eal.a 00:02:32.747 
[90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:32.747 [91/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:32.747 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:32.747 [93/267] Linking static target lib/librte_rcu.a 00:02:32.747 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:32.747 [95/267] Linking static target lib/librte_mempool.a 00:02:33.006 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:33.006 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:33.006 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:33.006 [99/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.006 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:33.006 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:33.264 [102/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.264 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:33.264 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:33.264 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:33.264 [106/267] Linking static target lib/librte_mbuf.a 00:02:33.264 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:33.264 [108/267] Linking static target lib/librte_net.a 00:02:33.264 [109/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:33.264 [110/267] Linking static target lib/librte_meter.a 00:02:33.522 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:33.522 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:33.522 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:33.522 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:33.780 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.781 [116/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.781 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.781 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:33.781 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:34.039 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:34.039 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:34.039 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.039 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:34.297 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:34.298 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:34.298 [126/267] Linking static target lib/librte_pci.a 00:02:34.298 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:34.298 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:34.298 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:34.298 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:34.298 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:34.556 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:34.556 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:34.556 [134/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.556 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:34.556 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:34.556 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:34.556 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:34.556 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:34.556 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:34.556 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:34.556 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:34.556 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:34.556 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:34.556 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:34.556 [146/267] Linking static target lib/librte_cmdline.a 00:02:34.814 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:34.814 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:35.072 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:35.072 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:35.072 [151/267] Linking static target lib/librte_timer.a 00:02:35.072 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:35.072 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:35.072 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:35.072 [155/267] Linking static target lib/librte_ethdev.a 00:02:35.331 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:35.331 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:35.331 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:35.331 [159/267] Linking static target lib/librte_compressdev.a 00:02:35.331 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:35.331 [161/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:35.331 [162/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:35.331 [163/267] Linking static target lib/librte_hash.a 00:02:35.331 [164/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.589 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:35.589 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:35.589 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:35.589 [168/267] Linking static target lib/librte_dmadev.a 00:02:35.589 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:35.882 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:35.882 [171/267] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:35.882 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:35.882 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.882 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.141 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:36.141 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:36.141 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:36.141 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:36.141 [179/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.141 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:36.141 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:36.400 [182/267] Linking static target lib/librte_power.a 00:02:36.400 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.400 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:36.400 [185/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:36.400 [186/267] Linking static target lib/librte_reorder.a 00:02:36.657 [187/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:36.657 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:36.658 [189/267] Linking static target lib/librte_cryptodev.a 00:02:36.658 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:36.658 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:36.658 [192/267] Linking static target lib/librte_security.a 00:02:36.916 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.916 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:37.174 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.174 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.174 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:37.174 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:37.174 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:37.431 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:37.431 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:37.431 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:37.689 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:37.689 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:37.689 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:37.689 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:37.689 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:37.947 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:37.947 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:37.947 [210/267] Generating 
drivers/rte_bus_vdev.pmd.c with a custom command 00:02:37.947 [211/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:37.947 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:37.947 [213/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:37.947 [214/267] Linking static target drivers/librte_bus_vdev.a 00:02:37.947 [215/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:37.947 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:37.947 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:37.947 [218/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:37.947 [219/267] Linking static target drivers/librte_bus_pci.a 00:02:38.205 [220/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.205 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:38.205 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.205 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.205 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:38.205 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.463 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.721 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:39.657 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.657 [229/267] Linking target lib/librte_eal.so.24.1 00:02:39.915 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:39.915 [231/267] Linking target lib/librte_pci.so.24.1 00:02:39.915 [232/267] Linking target lib/librte_timer.so.24.1 00:02:39.915 [233/267] Linking target lib/librte_ring.so.24.1 00:02:39.915 [234/267] Linking target lib/librte_meter.so.24.1 00:02:39.915 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:39.915 [236/267] Linking target lib/librte_dmadev.so.24.1 00:02:39.915 [237/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:39.915 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:40.173 [239/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:40.173 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:40.173 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:40.173 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:40.173 [243/267] Linking target lib/librte_rcu.so.24.1 00:02:40.173 [244/267] Linking target lib/librte_mempool.so.24.1 00:02:40.173 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:40.173 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:40.173 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:40.173 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:40.432 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 
00:02:40.432 [250/267] Linking target lib/librte_reorder.so.24.1 00:02:40.432 [251/267] Linking target lib/librte_cryptodev.so.24.1 00:02:40.432 [252/267] Linking target lib/librte_net.so.24.1 00:02:40.432 [253/267] Linking target lib/librte_compressdev.so.24.1 00:02:40.432 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:40.432 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:40.432 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:40.432 [257/267] Linking target lib/librte_hash.so.24.1 00:02:40.432 [258/267] Linking target lib/librte_security.so.24.1 00:02:40.432 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.691 [260/267] Linking target lib/librte_ethdev.so.24.1 00:02:40.691 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:40.691 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:40.691 [263/267] Linking target lib/librte_power.so.24.1 00:02:41.258 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:41.258 [265/267] Linking static target lib/librte_vhost.a 00:02:42.629 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.629 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:42.629 INFO: autodetecting backend as ninja 00:02:42.629 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:54.874 CC lib/ut/ut.o 00:02:54.874 CC lib/ut_mock/mock.o 00:02:54.874 CC lib/log/log.o 00:02:54.874 CC lib/log/log_flags.o 00:02:54.874 CC lib/log/log_deprecated.o 00:02:54.874 LIB libspdk_ut_mock.a 00:02:54.874 LIB libspdk_ut.a 00:02:55.135 LIB libspdk_log.a 00:02:55.135 SO libspdk_ut_mock.so.6.0 00:02:55.135 SO libspdk_ut.so.2.0 00:02:55.135 SO libspdk_log.so.7.1 00:02:55.135 SYMLINK libspdk_ut_mock.so 00:02:55.135 SYMLINK libspdk_ut.so 00:02:55.135 SYMLINK libspdk_log.so 00:02:55.135 CC lib/util/base64.o 00:02:55.135 CXX lib/trace_parser/trace.o 00:02:55.135 CC lib/util/bit_array.o 00:02:55.135 CC lib/util/cpuset.o 00:02:55.135 CC lib/dma/dma.o 00:02:55.135 CC lib/util/crc32.o 00:02:55.135 CC lib/util/crc16.o 00:02:55.135 CC lib/util/crc32c.o 00:02:55.135 CC lib/ioat/ioat.o 00:02:55.395 CC lib/vfio_user/host/vfio_user_pci.o 00:02:55.395 CC lib/util/crc32_ieee.o 00:02:55.395 CC lib/util/crc64.o 00:02:55.395 CC lib/util/dif.o 00:02:55.396 CC lib/util/fd.o 00:02:55.396 LIB libspdk_dma.a 00:02:55.396 CC lib/util/fd_group.o 00:02:55.396 CC lib/util/file.o 00:02:55.396 SO libspdk_dma.so.5.0 00:02:55.396 CC lib/util/hexlify.o 00:02:55.396 CC lib/util/iov.o 00:02:55.396 SYMLINK libspdk_dma.so 00:02:55.396 CC lib/vfio_user/host/vfio_user.o 00:02:55.396 LIB libspdk_ioat.a 00:02:55.396 CC lib/util/math.o 00:02:55.656 SO libspdk_ioat.so.7.0 00:02:55.656 CC lib/util/net.o 00:02:55.656 CC lib/util/pipe.o 00:02:55.656 SYMLINK libspdk_ioat.so 00:02:55.656 CC lib/util/strerror_tls.o 00:02:55.656 CC lib/util/string.o 00:02:55.656 CC lib/util/uuid.o 00:02:55.656 CC lib/util/xor.o 00:02:55.656 CC lib/util/zipf.o 00:02:55.656 CC lib/util/md5.o 00:02:55.656 LIB libspdk_vfio_user.a 00:02:55.656 SO libspdk_vfio_user.so.5.0 00:02:55.656 SYMLINK libspdk_vfio_user.so 00:02:55.916 LIB libspdk_util.a 00:02:56.176 SO libspdk_util.so.10.1 00:02:56.176 SYMLINK libspdk_util.so 00:02:56.176 LIB libspdk_trace_parser.a 00:02:56.176 SO 
libspdk_trace_parser.so.6.0 00:02:56.435 SYMLINK libspdk_trace_parser.so 00:02:56.435 CC lib/env_dpdk/env.o 00:02:56.435 CC lib/env_dpdk/memory.o 00:02:56.435 CC lib/env_dpdk/pci.o 00:02:56.435 CC lib/idxd/idxd.o 00:02:56.435 CC lib/env_dpdk/init.o 00:02:56.435 CC lib/idxd/idxd_user.o 00:02:56.435 CC lib/json/json_parse.o 00:02:56.435 CC lib/rdma_utils/rdma_utils.o 00:02:56.435 CC lib/conf/conf.o 00:02:56.435 CC lib/vmd/vmd.o 00:02:56.435 LIB libspdk_conf.a 00:02:56.435 CC lib/json/json_util.o 00:02:56.435 CC lib/idxd/idxd_kernel.o 00:02:56.435 SO libspdk_conf.so.6.0 00:02:56.695 LIB libspdk_rdma_utils.a 00:02:56.695 SO libspdk_rdma_utils.so.1.0 00:02:56.695 SYMLINK libspdk_conf.so 00:02:56.695 CC lib/json/json_write.o 00:02:56.695 SYMLINK libspdk_rdma_utils.so 00:02:56.695 CC lib/vmd/led.o 00:02:56.695 CC lib/env_dpdk/threads.o 00:02:56.695 CC lib/env_dpdk/pci_ioat.o 00:02:56.695 CC lib/env_dpdk/pci_virtio.o 00:02:56.695 CC lib/env_dpdk/pci_vmd.o 00:02:56.695 CC lib/env_dpdk/pci_idxd.o 00:02:56.695 CC lib/env_dpdk/pci_event.o 00:02:56.695 CC lib/rdma_provider/common.o 00:02:56.956 LIB libspdk_json.a 00:02:56.956 CC lib/env_dpdk/sigbus_handler.o 00:02:56.956 CC lib/env_dpdk/pci_dpdk.o 00:02:56.956 SO libspdk_json.so.6.0 00:02:56.956 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:56.956 LIB libspdk_idxd.a 00:02:56.956 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:56.956 SYMLINK libspdk_json.so 00:02:56.956 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:56.956 SO libspdk_idxd.so.12.1 00:02:56.956 LIB libspdk_vmd.a 00:02:56.956 SO libspdk_vmd.so.6.0 00:02:56.956 SYMLINK libspdk_idxd.so 00:02:56.956 SYMLINK libspdk_vmd.so 00:02:57.217 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:57.217 CC lib/jsonrpc/jsonrpc_server.o 00:02:57.217 CC lib/jsonrpc/jsonrpc_client.o 00:02:57.217 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:57.217 LIB libspdk_rdma_provider.a 00:02:57.217 SO libspdk_rdma_provider.so.7.0 00:02:57.217 SYMLINK libspdk_rdma_provider.so 00:02:57.478 LIB libspdk_jsonrpc.a 00:02:57.478 SO libspdk_jsonrpc.so.6.0 00:02:57.478 SYMLINK libspdk_jsonrpc.so 00:02:57.739 CC lib/rpc/rpc.o 00:02:57.739 LIB libspdk_env_dpdk.a 00:02:57.739 SO libspdk_env_dpdk.so.15.1 00:02:57.999 LIB libspdk_rpc.a 00:02:57.999 SYMLINK libspdk_env_dpdk.so 00:02:57.999 SO libspdk_rpc.so.6.0 00:02:57.999 SYMLINK libspdk_rpc.so 00:02:58.261 CC lib/trace/trace_flags.o 00:02:58.261 CC lib/trace/trace_rpc.o 00:02:58.261 CC lib/trace/trace.o 00:02:58.261 CC lib/keyring/keyring.o 00:02:58.261 CC lib/keyring/keyring_rpc.o 00:02:58.261 CC lib/notify/notify.o 00:02:58.261 CC lib/notify/notify_rpc.o 00:02:58.261 LIB libspdk_notify.a 00:02:58.261 SO libspdk_notify.so.6.0 00:02:58.261 SYMLINK libspdk_notify.so 00:02:58.261 LIB libspdk_keyring.a 00:02:58.261 LIB libspdk_trace.a 00:02:58.261 SO libspdk_keyring.so.2.0 00:02:58.261 SO libspdk_trace.so.11.0 00:02:58.521 SYMLINK libspdk_keyring.so 00:02:58.522 SYMLINK libspdk_trace.so 00:02:58.522 CC lib/sock/sock.o 00:02:58.522 CC lib/sock/sock_rpc.o 00:02:58.522 CC lib/thread/thread.o 00:02:58.522 CC lib/thread/iobuf.o 00:02:59.092 LIB libspdk_sock.a 00:02:59.092 SO libspdk_sock.so.10.0 00:02:59.092 SYMLINK libspdk_sock.so 00:02:59.351 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:59.351 CC lib/nvme/nvme_ctrlr.o 00:02:59.351 CC lib/nvme/nvme_ns.o 00:02:59.351 CC lib/nvme/nvme_fabric.o 00:02:59.351 CC lib/nvme/nvme_ns_cmd.o 00:02:59.351 CC lib/nvme/nvme_pcie.o 00:02:59.351 CC lib/nvme/nvme_pcie_common.o 00:02:59.351 CC lib/nvme/nvme_qpair.o 00:02:59.351 CC lib/nvme/nvme.o 00:02:59.917 CC lib/nvme/nvme_quirks.o 
00:02:59.917 CC lib/nvme/nvme_transport.o 00:02:59.917 CC lib/nvme/nvme_discovery.o 00:02:59.917 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:59.917 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:00.175 CC lib/nvme/nvme_tcp.o 00:03:00.175 LIB libspdk_thread.a 00:03:00.175 SO libspdk_thread.so.11.0 00:03:00.175 CC lib/nvme/nvme_opal.o 00:03:00.175 SYMLINK libspdk_thread.so 00:03:00.175 CC lib/nvme/nvme_io_msg.o 00:03:00.434 CC lib/nvme/nvme_poll_group.o 00:03:00.434 CC lib/nvme/nvme_zns.o 00:03:00.434 CC lib/nvme/nvme_stubs.o 00:03:00.434 CC lib/nvme/nvme_auth.o 00:03:00.434 CC lib/nvme/nvme_cuse.o 00:03:00.434 CC lib/nvme/nvme_rdma.o 00:03:01.000 CC lib/accel/accel.o 00:03:01.000 CC lib/blob/blobstore.o 00:03:01.000 CC lib/init/json_config.o 00:03:01.000 CC lib/blob/request.o 00:03:01.000 CC lib/virtio/virtio.o 00:03:01.259 CC lib/init/subsystem.o 00:03:01.259 CC lib/init/subsystem_rpc.o 00:03:01.259 CC lib/fsdev/fsdev.o 00:03:01.259 CC lib/fsdev/fsdev_io.o 00:03:01.259 CC lib/blob/zeroes.o 00:03:01.259 CC lib/init/rpc.o 00:03:01.259 CC lib/virtio/virtio_vhost_user.o 00:03:01.259 CC lib/fsdev/fsdev_rpc.o 00:03:01.518 CC lib/blob/blob_bs_dev.o 00:03:01.518 CC lib/accel/accel_rpc.o 00:03:01.518 LIB libspdk_init.a 00:03:01.518 SO libspdk_init.so.6.0 00:03:01.518 CC lib/accel/accel_sw.o 00:03:01.518 SYMLINK libspdk_init.so 00:03:01.518 CC lib/virtio/virtio_vfio_user.o 00:03:01.518 CC lib/virtio/virtio_pci.o 00:03:01.776 CC lib/event/app.o 00:03:01.776 CC lib/event/log_rpc.o 00:03:01.776 CC lib/event/reactor.o 00:03:01.776 LIB libspdk_fsdev.a 00:03:01.776 SO libspdk_fsdev.so.2.0 00:03:01.776 CC lib/event/app_rpc.o 00:03:01.776 SYMLINK libspdk_fsdev.so 00:03:01.776 CC lib/event/scheduler_static.o 00:03:01.776 LIB libspdk_nvme.a 00:03:01.776 LIB libspdk_virtio.a 00:03:02.035 SO libspdk_virtio.so.7.0 00:03:02.035 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:02.035 LIB libspdk_accel.a 00:03:02.035 SO libspdk_accel.so.16.0 00:03:02.035 SYMLINK libspdk_virtio.so 00:03:02.035 SO libspdk_nvme.so.15.0 00:03:02.035 SYMLINK libspdk_accel.so 00:03:02.035 LIB libspdk_event.a 00:03:02.293 SO libspdk_event.so.14.0 00:03:02.293 SYMLINK libspdk_nvme.so 00:03:02.293 CC lib/bdev/bdev_rpc.o 00:03:02.293 CC lib/bdev/bdev_zone.o 00:03:02.293 CC lib/bdev/bdev.o 00:03:02.293 CC lib/bdev/scsi_nvme.o 00:03:02.293 CC lib/bdev/part.o 00:03:02.293 SYMLINK libspdk_event.so 00:03:02.567 LIB libspdk_fuse_dispatcher.a 00:03:02.567 SO libspdk_fuse_dispatcher.so.1.0 00:03:02.567 SYMLINK libspdk_fuse_dispatcher.so 00:03:03.971 LIB libspdk_blob.a 00:03:04.231 SO libspdk_blob.so.12.0 00:03:04.231 SYMLINK libspdk_blob.so 00:03:04.491 CC lib/blobfs/blobfs.o 00:03:04.491 CC lib/blobfs/tree.o 00:03:04.491 CC lib/lvol/lvol.o 00:03:04.491 LIB libspdk_bdev.a 00:03:04.491 SO libspdk_bdev.so.17.0 00:03:04.491 SYMLINK libspdk_bdev.so 00:03:04.750 CC lib/nbd/nbd.o 00:03:04.750 CC lib/nbd/nbd_rpc.o 00:03:04.750 CC lib/scsi/dev.o 00:03:04.750 CC lib/scsi/port.o 00:03:04.750 CC lib/scsi/lun.o 00:03:04.750 CC lib/ublk/ublk.o 00:03:04.750 CC lib/nvmf/ctrlr.o 00:03:04.750 CC lib/ftl/ftl_core.o 00:03:05.009 CC lib/ftl/ftl_init.o 00:03:05.009 CC lib/ftl/ftl_layout.o 00:03:05.009 CC lib/ftl/ftl_debug.o 00:03:05.009 CC lib/scsi/scsi.o 00:03:05.009 CC lib/nvmf/ctrlr_discovery.o 00:03:05.009 LIB libspdk_nbd.a 00:03:05.009 CC lib/nvmf/ctrlr_bdev.o 00:03:05.009 SO libspdk_nbd.so.7.0 00:03:05.009 CC lib/ftl/ftl_io.o 00:03:05.009 SYMLINK libspdk_nbd.so 00:03:05.009 CC lib/ftl/ftl_sb.o 00:03:05.267 CC lib/scsi/scsi_bdev.o 00:03:05.267 CC lib/ftl/ftl_l2p.o 00:03:05.267 
LIB libspdk_blobfs.a 00:03:05.267 CC lib/ftl/ftl_l2p_flat.o 00:03:05.267 SO libspdk_blobfs.so.11.0 00:03:05.267 CC lib/nvmf/subsystem.o 00:03:05.267 SYMLINK libspdk_blobfs.so 00:03:05.267 CC lib/nvmf/nvmf.o 00:03:05.267 LIB libspdk_lvol.a 00:03:05.267 SO libspdk_lvol.so.11.0 00:03:05.267 CC lib/ublk/ublk_rpc.o 00:03:05.267 CC lib/scsi/scsi_pr.o 00:03:05.267 CC lib/scsi/scsi_rpc.o 00:03:05.526 CC lib/ftl/ftl_nv_cache.o 00:03:05.526 SYMLINK libspdk_lvol.so 00:03:05.526 CC lib/scsi/task.o 00:03:05.526 CC lib/nvmf/nvmf_rpc.o 00:03:05.526 LIB libspdk_ublk.a 00:03:05.526 SO libspdk_ublk.so.3.0 00:03:05.526 CC lib/nvmf/transport.o 00:03:05.526 CC lib/nvmf/tcp.o 00:03:05.526 SYMLINK libspdk_ublk.so 00:03:05.526 CC lib/ftl/ftl_band.o 00:03:05.526 CC lib/nvmf/stubs.o 00:03:05.785 LIB libspdk_scsi.a 00:03:05.785 SO libspdk_scsi.so.9.0 00:03:05.785 SYMLINK libspdk_scsi.so 00:03:05.785 CC lib/nvmf/mdns_server.o 00:03:06.044 CC lib/ftl/ftl_band_ops.o 00:03:06.044 CC lib/nvmf/rdma.o 00:03:06.044 CC lib/ftl/ftl_writer.o 00:03:06.044 CC lib/nvmf/auth.o 00:03:06.302 CC lib/ftl/ftl_rq.o 00:03:06.302 CC lib/iscsi/conn.o 00:03:06.302 CC lib/ftl/ftl_l2p_cache.o 00:03:06.302 CC lib/ftl/ftl_reloc.o 00:03:06.302 CC lib/vhost/vhost.o 00:03:06.560 CC lib/vhost/vhost_rpc.o 00:03:06.560 CC lib/iscsi/init_grp.o 00:03:06.560 CC lib/ftl/ftl_p2l.o 00:03:06.560 CC lib/iscsi/iscsi.o 00:03:06.817 CC lib/iscsi/param.o 00:03:06.817 CC lib/vhost/vhost_scsi.o 00:03:06.817 CC lib/vhost/vhost_blk.o 00:03:06.817 CC lib/vhost/rte_vhost_user.o 00:03:06.817 CC lib/ftl/ftl_p2l_log.o 00:03:06.817 CC lib/ftl/mngt/ftl_mngt.o 00:03:07.077 CC lib/iscsi/portal_grp.o 00:03:07.077 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:07.077 CC lib/iscsi/tgt_node.o 00:03:07.077 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:07.077 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:07.077 CC lib/iscsi/iscsi_subsystem.o 00:03:07.336 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:07.336 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:07.336 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:07.336 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:07.336 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:07.596 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:07.596 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:07.596 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:07.596 CC lib/iscsi/iscsi_rpc.o 00:03:07.596 LIB libspdk_vhost.a 00:03:07.596 SO libspdk_vhost.so.8.0 00:03:07.596 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:07.596 CC lib/ftl/utils/ftl_conf.o 00:03:07.596 CC lib/iscsi/task.o 00:03:07.596 CC lib/ftl/utils/ftl_md.o 00:03:07.596 CC lib/ftl/utils/ftl_mempool.o 00:03:07.596 SYMLINK libspdk_vhost.so 00:03:07.596 CC lib/ftl/utils/ftl_bitmap.o 00:03:07.854 CC lib/ftl/utils/ftl_property.o 00:03:07.854 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:07.854 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:07.854 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:07.854 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:07.854 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:07.854 LIB libspdk_iscsi.a 00:03:07.854 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:07.854 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:07.854 SO libspdk_iscsi.so.8.0 00:03:07.854 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:07.854 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:07.854 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:07.854 LIB libspdk_nvmf.a 00:03:07.854 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:07.854 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:08.113 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:08.113 CC lib/ftl/base/ftl_base_dev.o 00:03:08.113 CC lib/ftl/base/ftl_base_bdev.o 00:03:08.113 SYMLINK libspdk_iscsi.so 00:03:08.113 CC 
lib/ftl/ftl_trace.o 00:03:08.113 SO libspdk_nvmf.so.20.0 00:03:08.113 SYMLINK libspdk_nvmf.so 00:03:08.113 LIB libspdk_ftl.a 00:03:08.372 SO libspdk_ftl.so.9.0 00:03:08.630 SYMLINK libspdk_ftl.so 00:03:08.888 CC module/env_dpdk/env_dpdk_rpc.o 00:03:08.888 CC module/keyring/file/keyring.o 00:03:08.889 CC module/keyring/linux/keyring.o 00:03:08.889 CC module/blob/bdev/blob_bdev.o 00:03:08.889 CC module/accel/dsa/accel_dsa.o 00:03:08.889 CC module/sock/posix/posix.o 00:03:08.889 CC module/accel/error/accel_error.o 00:03:08.889 CC module/accel/ioat/accel_ioat.o 00:03:08.889 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:08.889 CC module/fsdev/aio/fsdev_aio.o 00:03:08.889 LIB libspdk_env_dpdk_rpc.a 00:03:08.889 SO libspdk_env_dpdk_rpc.so.6.0 00:03:08.889 CC module/keyring/linux/keyring_rpc.o 00:03:08.889 SYMLINK libspdk_env_dpdk_rpc.so 00:03:08.889 CC module/keyring/file/keyring_rpc.o 00:03:08.889 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:08.889 CC module/accel/error/accel_error_rpc.o 00:03:09.147 CC module/accel/ioat/accel_ioat_rpc.o 00:03:09.147 LIB libspdk_keyring_linux.a 00:03:09.147 LIB libspdk_scheduler_dynamic.a 00:03:09.147 LIB libspdk_blob_bdev.a 00:03:09.147 SO libspdk_keyring_linux.so.1.0 00:03:09.147 SO libspdk_scheduler_dynamic.so.4.0 00:03:09.147 SO libspdk_blob_bdev.so.12.0 00:03:09.147 CC module/accel/dsa/accel_dsa_rpc.o 00:03:09.147 LIB libspdk_accel_error.a 00:03:09.147 SYMLINK libspdk_keyring_linux.so 00:03:09.147 LIB libspdk_keyring_file.a 00:03:09.147 SYMLINK libspdk_blob_bdev.so 00:03:09.147 SYMLINK libspdk_scheduler_dynamic.so 00:03:09.147 CC module/fsdev/aio/linux_aio_mgr.o 00:03:09.147 LIB libspdk_accel_ioat.a 00:03:09.147 SO libspdk_accel_error.so.2.0 00:03:09.147 SO libspdk_keyring_file.so.2.0 00:03:09.147 SO libspdk_accel_ioat.so.6.0 00:03:09.147 LIB libspdk_accel_dsa.a 00:03:09.147 SYMLINK libspdk_accel_error.so 00:03:09.147 SYMLINK libspdk_keyring_file.so 00:03:09.147 SO libspdk_accel_dsa.so.5.0 00:03:09.147 SYMLINK libspdk_accel_ioat.so 00:03:09.147 SYMLINK libspdk_accel_dsa.so 00:03:09.147 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:09.147 CC module/accel/iaa/accel_iaa.o 00:03:09.147 CC module/accel/iaa/accel_iaa_rpc.o 00:03:09.406 CC module/scheduler/gscheduler/gscheduler.o 00:03:09.406 CC module/blobfs/bdev/blobfs_bdev.o 00:03:09.406 LIB libspdk_scheduler_gscheduler.a 00:03:09.406 LIB libspdk_scheduler_dpdk_governor.a 00:03:09.406 CC module/bdev/error/vbdev_error.o 00:03:09.406 CC module/bdev/delay/vbdev_delay.o 00:03:09.406 SO libspdk_scheduler_gscheduler.so.4.0 00:03:09.406 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:09.406 LIB libspdk_accel_iaa.a 00:03:09.406 SO libspdk_accel_iaa.so.3.0 00:03:09.406 CC module/bdev/gpt/gpt.o 00:03:09.406 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:09.406 SYMLINK libspdk_scheduler_gscheduler.so 00:03:09.406 CC module/bdev/gpt/vbdev_gpt.o 00:03:09.406 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:09.406 CC module/bdev/lvol/vbdev_lvol.o 00:03:09.406 LIB libspdk_sock_posix.a 00:03:09.406 SYMLINK libspdk_accel_iaa.so 00:03:09.406 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:09.406 SO libspdk_sock_posix.so.6.0 00:03:09.664 LIB libspdk_fsdev_aio.a 00:03:09.664 LIB libspdk_blobfs_bdev.a 00:03:09.664 SO libspdk_fsdev_aio.so.1.0 00:03:09.664 SYMLINK libspdk_sock_posix.so 00:03:09.664 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:09.664 SO libspdk_blobfs_bdev.so.6.0 00:03:09.664 CC module/bdev/malloc/bdev_malloc.o 00:03:09.664 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:09.664 SYMLINK libspdk_fsdev_aio.so 
00:03:09.664 CC module/bdev/error/vbdev_error_rpc.o 00:03:09.664 SYMLINK libspdk_blobfs_bdev.so 00:03:09.664 LIB libspdk_bdev_delay.a 00:03:09.664 SO libspdk_bdev_delay.so.6.0 00:03:09.664 LIB libspdk_bdev_gpt.a 00:03:09.664 SO libspdk_bdev_gpt.so.6.0 00:03:09.665 CC module/bdev/nvme/bdev_nvme.o 00:03:09.665 SYMLINK libspdk_bdev_delay.so 00:03:09.665 CC module/bdev/null/bdev_null.o 00:03:09.665 LIB libspdk_bdev_error.a 00:03:09.665 SYMLINK libspdk_bdev_gpt.so 00:03:09.665 CC module/bdev/passthru/vbdev_passthru.o 00:03:09.923 SO libspdk_bdev_error.so.6.0 00:03:09.923 SYMLINK libspdk_bdev_error.so 00:03:09.923 LIB libspdk_bdev_malloc.a 00:03:09.923 CC module/bdev/raid/bdev_raid.o 00:03:09.923 CC module/bdev/split/vbdev_split.o 00:03:09.923 SO libspdk_bdev_malloc.so.6.0 00:03:09.923 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:09.923 LIB libspdk_bdev_lvol.a 00:03:09.923 SYMLINK libspdk_bdev_malloc.so 00:03:09.923 CC module/bdev/xnvme/bdev_xnvme.o 00:03:09.923 CC module/bdev/raid/bdev_raid_rpc.o 00:03:09.923 SO libspdk_bdev_lvol.so.6.0 00:03:09.923 CC module/bdev/aio/bdev_aio.o 00:03:09.923 CC module/bdev/null/bdev_null_rpc.o 00:03:10.182 SYMLINK libspdk_bdev_lvol.so 00:03:10.182 CC module/bdev/raid/bdev_raid_sb.o 00:03:10.182 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:10.182 CC module/bdev/split/vbdev_split_rpc.o 00:03:10.182 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:10.182 CC module/bdev/aio/bdev_aio_rpc.o 00:03:10.182 LIB libspdk_bdev_null.a 00:03:10.182 SO libspdk_bdev_null.so.6.0 00:03:10.182 LIB libspdk_bdev_passthru.a 00:03:10.182 LIB libspdk_bdev_split.a 00:03:10.182 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:10.182 SO libspdk_bdev_passthru.so.6.0 00:03:10.182 SO libspdk_bdev_split.so.6.0 00:03:10.182 SYMLINK libspdk_bdev_null.so 00:03:10.182 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:10.182 LIB libspdk_bdev_xnvme.a 00:03:10.440 SYMLINK libspdk_bdev_passthru.so 00:03:10.440 SO libspdk_bdev_xnvme.so.3.0 00:03:10.440 CC module/bdev/nvme/nvme_rpc.o 00:03:10.440 CC module/bdev/nvme/bdev_mdns_client.o 00:03:10.440 SYMLINK libspdk_bdev_split.so 00:03:10.440 LIB libspdk_bdev_aio.a 00:03:10.440 SO libspdk_bdev_aio.so.6.0 00:03:10.440 LIB libspdk_bdev_zone_block.a 00:03:10.440 SYMLINK libspdk_bdev_xnvme.so 00:03:10.440 SO libspdk_bdev_zone_block.so.6.0 00:03:10.440 CC module/bdev/nvme/vbdev_opal.o 00:03:10.440 SYMLINK libspdk_bdev_aio.so 00:03:10.440 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:10.440 CC module/bdev/raid/raid0.o 00:03:10.440 CC module/bdev/ftl/bdev_ftl.o 00:03:10.440 SYMLINK libspdk_bdev_zone_block.so 00:03:10.440 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:10.440 CC module/bdev/iscsi/bdev_iscsi.o 00:03:10.440 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:10.698 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:10.698 CC module/bdev/raid/raid1.o 00:03:10.699 CC module/bdev/raid/concat.o 00:03:10.699 LIB libspdk_bdev_ftl.a 00:03:10.699 SO libspdk_bdev_ftl.so.6.0 00:03:10.957 SYMLINK libspdk_bdev_ftl.so 00:03:10.957 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:10.957 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:10.957 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:10.957 LIB libspdk_bdev_iscsi.a 00:03:10.957 SO libspdk_bdev_iscsi.so.6.0 00:03:10.957 LIB libspdk_bdev_raid.a 00:03:10.957 SYMLINK libspdk_bdev_iscsi.so 00:03:10.957 SO libspdk_bdev_raid.so.6.0 00:03:10.957 SYMLINK libspdk_bdev_raid.so 00:03:11.215 LIB libspdk_bdev_virtio.a 00:03:11.215 SO libspdk_bdev_virtio.so.6.0 00:03:11.476 SYMLINK libspdk_bdev_virtio.so 00:03:11.736 LIB libspdk_bdev_nvme.a 
00:03:11.995 SO libspdk_bdev_nvme.so.7.1 00:03:11.995 SYMLINK libspdk_bdev_nvme.so 00:03:12.256 CC module/event/subsystems/vmd/vmd.o 00:03:12.256 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:12.256 CC module/event/subsystems/scheduler/scheduler.o 00:03:12.256 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:12.256 CC module/event/subsystems/keyring/keyring.o 00:03:12.256 CC module/event/subsystems/iobuf/iobuf.o 00:03:12.256 CC module/event/subsystems/sock/sock.o 00:03:12.256 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:12.256 CC module/event/subsystems/fsdev/fsdev.o 00:03:12.516 LIB libspdk_event_keyring.a 00:03:12.516 LIB libspdk_event_vmd.a 00:03:12.516 LIB libspdk_event_vhost_blk.a 00:03:12.516 LIB libspdk_event_scheduler.a 00:03:12.516 LIB libspdk_event_sock.a 00:03:12.516 LIB libspdk_event_fsdev.a 00:03:12.516 SO libspdk_event_keyring.so.1.0 00:03:12.516 SO libspdk_event_vmd.so.6.0 00:03:12.516 LIB libspdk_event_iobuf.a 00:03:12.516 SO libspdk_event_vhost_blk.so.3.0 00:03:12.516 SO libspdk_event_scheduler.so.4.0 00:03:12.516 SO libspdk_event_sock.so.5.0 00:03:12.516 SO libspdk_event_fsdev.so.1.0 00:03:12.516 SO libspdk_event_iobuf.so.3.0 00:03:12.516 SYMLINK libspdk_event_keyring.so 00:03:12.516 SYMLINK libspdk_event_sock.so 00:03:12.517 SYMLINK libspdk_event_vmd.so 00:03:12.517 SYMLINK libspdk_event_scheduler.so 00:03:12.517 SYMLINK libspdk_event_vhost_blk.so 00:03:12.517 SYMLINK libspdk_event_iobuf.so 00:03:12.517 SYMLINK libspdk_event_fsdev.so 00:03:12.777 CC module/event/subsystems/accel/accel.o 00:03:13.038 LIB libspdk_event_accel.a 00:03:13.038 SO libspdk_event_accel.so.6.0 00:03:13.038 SYMLINK libspdk_event_accel.so 00:03:13.299 CC module/event/subsystems/bdev/bdev.o 00:03:13.299 LIB libspdk_event_bdev.a 00:03:13.299 SO libspdk_event_bdev.so.6.0 00:03:13.299 SYMLINK libspdk_event_bdev.so 00:03:13.558 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:13.558 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:13.558 CC module/event/subsystems/nbd/nbd.o 00:03:13.558 CC module/event/subsystems/ublk/ublk.o 00:03:13.558 CC module/event/subsystems/scsi/scsi.o 00:03:13.818 LIB libspdk_event_nbd.a 00:03:13.818 LIB libspdk_event_ublk.a 00:03:13.818 LIB libspdk_event_scsi.a 00:03:13.818 SO libspdk_event_nbd.so.6.0 00:03:13.818 SO libspdk_event_ublk.so.3.0 00:03:13.818 SO libspdk_event_scsi.so.6.0 00:03:13.818 SYMLINK libspdk_event_nbd.so 00:03:13.818 LIB libspdk_event_nvmf.a 00:03:13.818 SYMLINK libspdk_event_ublk.so 00:03:13.818 SYMLINK libspdk_event_scsi.so 00:03:13.818 SO libspdk_event_nvmf.so.6.0 00:03:13.818 SYMLINK libspdk_event_nvmf.so 00:03:14.078 CC module/event/subsystems/iscsi/iscsi.o 00:03:14.078 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:14.078 LIB libspdk_event_vhost_scsi.a 00:03:14.078 LIB libspdk_event_iscsi.a 00:03:14.078 SO libspdk_event_vhost_scsi.so.3.0 00:03:14.078 SO libspdk_event_iscsi.so.6.0 00:03:14.078 SYMLINK libspdk_event_vhost_scsi.so 00:03:14.078 SYMLINK libspdk_event_iscsi.so 00:03:14.337 SO libspdk.so.6.0 00:03:14.337 SYMLINK libspdk.so 00:03:14.596 CC test/rpc_client/rpc_client_test.o 00:03:14.596 CXX app/trace/trace.o 00:03:14.596 TEST_HEADER include/spdk/accel.h 00:03:14.596 TEST_HEADER include/spdk/accel_module.h 00:03:14.596 TEST_HEADER include/spdk/assert.h 00:03:14.596 TEST_HEADER include/spdk/barrier.h 00:03:14.596 TEST_HEADER include/spdk/base64.h 00:03:14.596 TEST_HEADER include/spdk/bdev.h 00:03:14.596 TEST_HEADER include/spdk/bdev_module.h 00:03:14.596 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.596 TEST_HEADER 
include/spdk/bit_array.h 00:03:14.596 TEST_HEADER include/spdk/bit_pool.h 00:03:14.596 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.596 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.596 TEST_HEADER include/spdk/blobfs.h 00:03:14.596 TEST_HEADER include/spdk/blob.h 00:03:14.596 TEST_HEADER include/spdk/conf.h 00:03:14.596 TEST_HEADER include/spdk/config.h 00:03:14.596 TEST_HEADER include/spdk/cpuset.h 00:03:14.596 TEST_HEADER include/spdk/crc16.h 00:03:14.596 TEST_HEADER include/spdk/crc32.h 00:03:14.596 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:14.596 TEST_HEADER include/spdk/crc64.h 00:03:14.596 TEST_HEADER include/spdk/dif.h 00:03:14.596 TEST_HEADER include/spdk/dma.h 00:03:14.596 TEST_HEADER include/spdk/endian.h 00:03:14.596 TEST_HEADER include/spdk/env_dpdk.h 00:03:14.596 TEST_HEADER include/spdk/env.h 00:03:14.596 TEST_HEADER include/spdk/event.h 00:03:14.596 TEST_HEADER include/spdk/fd_group.h 00:03:14.596 TEST_HEADER include/spdk/fd.h 00:03:14.596 TEST_HEADER include/spdk/file.h 00:03:14.596 TEST_HEADER include/spdk/fsdev.h 00:03:14.596 TEST_HEADER include/spdk/fsdev_module.h 00:03:14.596 TEST_HEADER include/spdk/ftl.h 00:03:14.596 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:14.596 TEST_HEADER include/spdk/gpt_spec.h 00:03:14.596 TEST_HEADER include/spdk/hexlify.h 00:03:14.596 TEST_HEADER include/spdk/histogram_data.h 00:03:14.596 TEST_HEADER include/spdk/idxd.h 00:03:14.596 TEST_HEADER include/spdk/idxd_spec.h 00:03:14.596 TEST_HEADER include/spdk/init.h 00:03:14.596 TEST_HEADER include/spdk/ioat.h 00:03:14.596 TEST_HEADER include/spdk/ioat_spec.h 00:03:14.596 CC examples/ioat/perf/perf.o 00:03:14.596 TEST_HEADER include/spdk/iscsi_spec.h 00:03:14.596 CC examples/util/zipf/zipf.o 00:03:14.596 TEST_HEADER include/spdk/json.h 00:03:14.596 TEST_HEADER include/spdk/jsonrpc.h 00:03:14.596 TEST_HEADER include/spdk/keyring.h 00:03:14.596 TEST_HEADER include/spdk/keyring_module.h 00:03:14.596 TEST_HEADER include/spdk/likely.h 00:03:14.596 CC test/thread/poller_perf/poller_perf.o 00:03:14.596 TEST_HEADER include/spdk/log.h 00:03:14.596 TEST_HEADER include/spdk/lvol.h 00:03:14.596 TEST_HEADER include/spdk/md5.h 00:03:14.596 TEST_HEADER include/spdk/memory.h 00:03:14.596 TEST_HEADER include/spdk/mmio.h 00:03:14.597 TEST_HEADER include/spdk/nbd.h 00:03:14.597 TEST_HEADER include/spdk/net.h 00:03:14.597 CC test/dma/test_dma/test_dma.o 00:03:14.597 TEST_HEADER include/spdk/notify.h 00:03:14.597 TEST_HEADER include/spdk/nvme.h 00:03:14.597 TEST_HEADER include/spdk/nvme_intel.h 00:03:14.597 CC test/app/bdev_svc/bdev_svc.o 00:03:14.597 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:14.597 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:14.597 TEST_HEADER include/spdk/nvme_spec.h 00:03:14.597 TEST_HEADER include/spdk/nvme_zns.h 00:03:14.597 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:14.597 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:14.597 TEST_HEADER include/spdk/nvmf.h 00:03:14.597 TEST_HEADER include/spdk/nvmf_spec.h 00:03:14.597 TEST_HEADER include/spdk/nvmf_transport.h 00:03:14.597 TEST_HEADER include/spdk/opal.h 00:03:14.597 TEST_HEADER include/spdk/opal_spec.h 00:03:14.597 TEST_HEADER include/spdk/pci_ids.h 00:03:14.597 TEST_HEADER include/spdk/pipe.h 00:03:14.597 TEST_HEADER include/spdk/queue.h 00:03:14.597 TEST_HEADER include/spdk/reduce.h 00:03:14.597 TEST_HEADER include/spdk/rpc.h 00:03:14.597 TEST_HEADER include/spdk/scheduler.h 00:03:14.597 TEST_HEADER include/spdk/scsi.h 00:03:14.597 TEST_HEADER include/spdk/scsi_spec.h 00:03:14.597 TEST_HEADER include/spdk/sock.h 
00:03:14.597 TEST_HEADER include/spdk/stdinc.h 00:03:14.597 TEST_HEADER include/spdk/string.h 00:03:14.597 TEST_HEADER include/spdk/thread.h 00:03:14.597 CC test/env/mem_callbacks/mem_callbacks.o 00:03:14.597 TEST_HEADER include/spdk/trace.h 00:03:14.597 TEST_HEADER include/spdk/trace_parser.h 00:03:14.597 TEST_HEADER include/spdk/tree.h 00:03:14.597 TEST_HEADER include/spdk/ublk.h 00:03:14.597 TEST_HEADER include/spdk/util.h 00:03:14.597 TEST_HEADER include/spdk/uuid.h 00:03:14.597 TEST_HEADER include/spdk/version.h 00:03:14.597 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:14.597 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:14.597 TEST_HEADER include/spdk/vhost.h 00:03:14.597 TEST_HEADER include/spdk/vmd.h 00:03:14.597 TEST_HEADER include/spdk/xor.h 00:03:14.597 TEST_HEADER include/spdk/zipf.h 00:03:14.597 CXX test/cpp_headers/accel.o 00:03:14.597 LINK rpc_client_test 00:03:14.597 LINK interrupt_tgt 00:03:14.597 LINK poller_perf 00:03:14.597 LINK zipf 00:03:14.597 LINK ioat_perf 00:03:14.597 LINK bdev_svc 00:03:14.855 LINK spdk_trace 00:03:14.855 CXX test/cpp_headers/accel_module.o 00:03:14.855 CC examples/ioat/verify/verify.o 00:03:14.855 CC test/env/vtophys/vtophys.o 00:03:14.855 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:14.855 CC test/env/memory/memory_ut.o 00:03:14.855 CXX test/cpp_headers/assert.o 00:03:14.855 CC test/env/pci/pci_ut.o 00:03:14.855 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:14.855 CC app/trace_record/trace_record.o 00:03:14.855 LINK vtophys 00:03:15.114 LINK verify 00:03:15.114 LINK mem_callbacks 00:03:15.114 LINK test_dma 00:03:15.114 LINK env_dpdk_post_init 00:03:15.114 CXX test/cpp_headers/barrier.o 00:03:15.114 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:15.114 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:15.114 LINK spdk_trace_record 00:03:15.114 CXX test/cpp_headers/base64.o 00:03:15.373 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:15.373 CC examples/thread/thread/thread_ex.o 00:03:15.373 LINK pci_ut 00:03:15.373 CXX test/cpp_headers/bdev.o 00:03:15.373 LINK nvme_fuzz 00:03:15.373 CC examples/sock/hello_world/hello_sock.o 00:03:15.373 CC examples/vmd/lsvmd/lsvmd.o 00:03:15.373 CC app/nvmf_tgt/nvmf_main.o 00:03:15.373 LINK lsvmd 00:03:15.373 CXX test/cpp_headers/bdev_module.o 00:03:15.373 LINK thread 00:03:15.632 CC examples/vmd/led/led.o 00:03:15.632 LINK hello_sock 00:03:15.632 LINK nvmf_tgt 00:03:15.632 CXX test/cpp_headers/bdev_zone.o 00:03:15.632 CC examples/idxd/perf/perf.o 00:03:15.632 CXX test/cpp_headers/bit_array.o 00:03:15.632 LINK vhost_fuzz 00:03:15.632 LINK led 00:03:15.632 CXX test/cpp_headers/bit_pool.o 00:03:15.891 CC examples/accel/perf/accel_perf.o 00:03:15.891 CC examples/blob/hello_world/hello_blob.o 00:03:15.891 CC app/iscsi_tgt/iscsi_tgt.o 00:03:15.891 CC examples/nvme/hello_world/hello_world.o 00:03:15.891 CC examples/blob/cli/blobcli.o 00:03:15.891 CXX test/cpp_headers/blob_bdev.o 00:03:15.891 LINK idxd_perf 00:03:15.891 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:15.891 LINK memory_ut 00:03:15.891 LINK iscsi_tgt 00:03:15.891 LINK hello_blob 00:03:16.149 LINK hello_world 00:03:16.149 CXX test/cpp_headers/blobfs_bdev.o 00:03:16.149 CXX test/cpp_headers/blobfs.o 00:03:16.149 CC test/event/event_perf/event_perf.o 00:03:16.149 LINK accel_perf 00:03:16.149 CXX test/cpp_headers/blob.o 00:03:16.149 LINK hello_fsdev 00:03:16.149 CC examples/nvme/reconnect/reconnect.o 00:03:16.149 LINK event_perf 00:03:16.149 CXX test/cpp_headers/conf.o 00:03:16.149 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:16.414 CC 
app/spdk_tgt/spdk_tgt.o 00:03:16.414 CC test/event/reactor/reactor.o 00:03:16.414 LINK blobcli 00:03:16.414 CC test/event/reactor_perf/reactor_perf.o 00:03:16.414 CXX test/cpp_headers/config.o 00:03:16.414 CXX test/cpp_headers/cpuset.o 00:03:16.414 LINK reactor 00:03:16.414 LINK spdk_tgt 00:03:16.414 CC test/event/app_repeat/app_repeat.o 00:03:16.414 LINK iscsi_fuzz 00:03:16.414 LINK reactor_perf 00:03:16.414 CC examples/bdev/hello_world/hello_bdev.o 00:03:16.672 LINK app_repeat 00:03:16.672 CC examples/bdev/bdevperf/bdevperf.o 00:03:16.672 CXX test/cpp_headers/crc16.o 00:03:16.672 LINK reconnect 00:03:16.672 CC examples/nvme/arbitration/arbitration.o 00:03:16.672 LINK nvme_manage 00:03:16.672 CC app/spdk_lspci/spdk_lspci.o 00:03:16.672 CXX test/cpp_headers/crc32.o 00:03:16.672 LINK hello_bdev 00:03:16.672 CC app/spdk_nvme_perf/perf.o 00:03:16.672 CC test/app/histogram_perf/histogram_perf.o 00:03:16.672 CC test/app/jsoncat/jsoncat.o 00:03:16.672 CC test/event/scheduler/scheduler.o 00:03:16.672 LINK spdk_lspci 00:03:16.672 CXX test/cpp_headers/crc64.o 00:03:16.931 CXX test/cpp_headers/dif.o 00:03:16.931 LINK histogram_perf 00:03:16.931 LINK arbitration 00:03:16.931 CXX test/cpp_headers/dma.o 00:03:16.931 LINK jsoncat 00:03:16.931 CC test/nvme/aer/aer.o 00:03:16.931 LINK scheduler 00:03:16.931 CC app/spdk_nvme_identify/identify.o 00:03:16.931 CXX test/cpp_headers/endian.o 00:03:16.931 CC examples/nvme/hotplug/hotplug.o 00:03:16.931 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:16.931 CC examples/nvme/abort/abort.o 00:03:16.931 CC test/app/stub/stub.o 00:03:16.931 CXX test/cpp_headers/env_dpdk.o 00:03:17.189 LINK cmb_copy 00:03:17.189 CC test/nvme/reset/reset.o 00:03:17.189 LINK hotplug 00:03:17.189 LINK stub 00:03:17.189 LINK aer 00:03:17.189 CXX test/cpp_headers/env.o 00:03:17.189 CXX test/cpp_headers/event.o 00:03:17.189 CXX test/cpp_headers/fd_group.o 00:03:17.189 LINK abort 00:03:17.447 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:17.447 CXX test/cpp_headers/fd.o 00:03:17.447 LINK bdevperf 00:03:17.447 CC test/nvme/sgl/sgl.o 00:03:17.447 CC test/nvme/e2edp/nvme_dp.o 00:03:17.447 LINK reset 00:03:17.447 CC app/spdk_nvme_discover/discovery_aer.o 00:03:17.447 CC test/nvme/overhead/overhead.o 00:03:17.447 CXX test/cpp_headers/file.o 00:03:17.447 CXX test/cpp_headers/fsdev.o 00:03:17.447 CXX test/cpp_headers/fsdev_module.o 00:03:17.447 LINK pmr_persistence 00:03:17.447 LINK spdk_nvme_perf 00:03:17.447 LINK nvme_dp 00:03:17.706 LINK sgl 00:03:17.706 CXX test/cpp_headers/ftl.o 00:03:17.706 LINK spdk_nvme_discover 00:03:17.706 CC app/spdk_top/spdk_top.o 00:03:17.706 LINK overhead 00:03:17.706 CC test/nvme/err_injection/err_injection.o 00:03:17.706 CXX test/cpp_headers/fuse_dispatcher.o 00:03:17.706 CC app/vhost/vhost.o 00:03:17.706 CC test/nvme/startup/startup.o 00:03:17.706 LINK spdk_nvme_identify 00:03:17.706 CC examples/nvmf/nvmf/nvmf.o 00:03:17.706 LINK err_injection 00:03:17.965 CC test/nvme/reserve/reserve.o 00:03:17.965 CC app/spdk_dd/spdk_dd.o 00:03:17.965 CC test/accel/dif/dif.o 00:03:17.965 CXX test/cpp_headers/gpt_spec.o 00:03:17.965 LINK vhost 00:03:17.965 CXX test/cpp_headers/hexlify.o 00:03:17.965 LINK startup 00:03:17.965 CC app/fio/nvme/fio_plugin.o 00:03:17.965 LINK reserve 00:03:17.965 CXX test/cpp_headers/histogram_data.o 00:03:17.965 LINK nvmf 00:03:18.222 CC test/nvme/simple_copy/simple_copy.o 00:03:18.222 CC test/nvme/connect_stress/connect_stress.o 00:03:18.222 CC app/fio/bdev/fio_plugin.o 00:03:18.222 CXX test/cpp_headers/idxd.o 00:03:18.222 LINK spdk_dd 
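Among the binaries compiled and linked in this stretch are the NVMe command-line tools (spdk_nvme_perf, spdk_nvme_identify, spdk_nvme_discover) that later functional tests drive. As a rough illustration of how the perf tool built here is usually run against a local PCIe controller, the options and install path below are the commonly documented ones rather than values taken from this log, and hugepages must be reserved first via scripts/setup.sh:

    # reserve hugepages and bind the NVMe devices to a userspace driver (run as root)
    sudo HUGEMEM=2048 scripts/setup.sh
    # 4 KiB random reads, queue depth 32, for 10 seconds against one PCIe controller
    sudo ./build/bin/spdk_nvme_perf -r 'trtype:PCIe traddr:0000:00:10.0' -q 32 -o 4096 -w randread -t 10
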
00:03:18.222 CXX test/cpp_headers/idxd_spec.o 00:03:18.222 CXX test/cpp_headers/init.o 00:03:18.222 LINK connect_stress 00:03:18.222 CC test/blobfs/mkfs/mkfs.o 00:03:18.222 LINK simple_copy 00:03:18.222 CXX test/cpp_headers/ioat.o 00:03:18.222 CXX test/cpp_headers/ioat_spec.o 00:03:18.481 CXX test/cpp_headers/iscsi_spec.o 00:03:18.481 LINK dif 00:03:18.481 LINK spdk_nvme 00:03:18.481 LINK mkfs 00:03:18.481 CC test/nvme/boot_partition/boot_partition.o 00:03:18.481 CXX test/cpp_headers/json.o 00:03:18.481 CC test/nvme/compliance/nvme_compliance.o 00:03:18.481 CXX test/cpp_headers/jsonrpc.o 00:03:18.481 LINK spdk_top 00:03:18.481 LINK boot_partition 00:03:18.481 LINK spdk_bdev 00:03:18.481 CC test/nvme/fused_ordering/fused_ordering.o 00:03:18.481 CXX test/cpp_headers/keyring.o 00:03:18.481 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:18.739 CC test/nvme/fdp/fdp.o 00:03:18.739 CC test/nvme/cuse/cuse.o 00:03:18.739 CXX test/cpp_headers/keyring_module.o 00:03:18.739 CC test/lvol/esnap/esnap.o 00:03:18.739 CXX test/cpp_headers/likely.o 00:03:18.739 CXX test/cpp_headers/log.o 00:03:18.739 LINK nvme_compliance 00:03:18.739 LINK doorbell_aers 00:03:18.739 CXX test/cpp_headers/lvol.o 00:03:18.739 CXX test/cpp_headers/md5.o 00:03:18.739 LINK fused_ordering 00:03:18.739 CC test/bdev/bdevio/bdevio.o 00:03:18.739 CXX test/cpp_headers/memory.o 00:03:18.998 LINK fdp 00:03:18.998 CXX test/cpp_headers/mmio.o 00:03:18.998 CXX test/cpp_headers/nbd.o 00:03:18.998 CXX test/cpp_headers/net.o 00:03:18.998 CXX test/cpp_headers/notify.o 00:03:18.998 CXX test/cpp_headers/nvme.o 00:03:18.998 CXX test/cpp_headers/nvme_intel.o 00:03:18.998 CXX test/cpp_headers/nvme_ocssd.o 00:03:18.998 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:18.998 CXX test/cpp_headers/nvme_spec.o 00:03:18.998 CXX test/cpp_headers/nvme_zns.o 00:03:18.998 CXX test/cpp_headers/nvmf_cmd.o 00:03:18.998 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:18.998 CXX test/cpp_headers/nvmf.o 00:03:18.998 CXX test/cpp_headers/nvmf_spec.o 00:03:19.257 CXX test/cpp_headers/nvmf_transport.o 00:03:19.257 CXX test/cpp_headers/opal.o 00:03:19.257 CXX test/cpp_headers/opal_spec.o 00:03:19.257 LINK bdevio 00:03:19.257 CXX test/cpp_headers/pci_ids.o 00:03:19.257 CXX test/cpp_headers/pipe.o 00:03:19.257 CXX test/cpp_headers/queue.o 00:03:19.257 CXX test/cpp_headers/reduce.o 00:03:19.257 CXX test/cpp_headers/rpc.o 00:03:19.257 CXX test/cpp_headers/scheduler.o 00:03:19.257 CXX test/cpp_headers/scsi.o 00:03:19.257 CXX test/cpp_headers/scsi_spec.o 00:03:19.257 CXX test/cpp_headers/sock.o 00:03:19.257 CXX test/cpp_headers/stdinc.o 00:03:19.257 CXX test/cpp_headers/string.o 00:03:19.257 CXX test/cpp_headers/thread.o 00:03:19.257 CXX test/cpp_headers/trace.o 00:03:19.516 CXX test/cpp_headers/trace_parser.o 00:03:19.516 CXX test/cpp_headers/tree.o 00:03:19.516 CXX test/cpp_headers/ublk.o 00:03:19.516 CXX test/cpp_headers/util.o 00:03:19.516 CXX test/cpp_headers/uuid.o 00:03:19.516 CXX test/cpp_headers/version.o 00:03:19.516 CXX test/cpp_headers/vfio_user_pci.o 00:03:19.516 CXX test/cpp_headers/vfio_user_spec.o 00:03:19.516 CXX test/cpp_headers/vhost.o 00:03:19.516 CXX test/cpp_headers/vmd.o 00:03:19.516 CXX test/cpp_headers/xor.o 00:03:19.516 CXX test/cpp_headers/zipf.o 00:03:19.516 LINK cuse 00:03:23.768 LINK esnap 00:03:23.768 00:03:23.768 real 1m4.373s 00:03:23.768 user 5m51.447s 00:03:23.768 sys 1m3.776s 00:03:23.768 09:36:02 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:23.768 09:36:02 make -- common/autotest_common.sh@10 -- $ set +x 00:03:23.768 
************************************ 00:03:23.768 END TEST make 00:03:23.768 ************************************ 00:03:23.768 09:36:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:23.768 09:36:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:23.768 09:36:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:23.768 09:36:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.768 09:36:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:23.768 09:36:02 -- pm/common@44 -- $ pid=5078 00:03:23.768 09:36:02 -- pm/common@50 -- $ kill -TERM 5078 00:03:23.768 09:36:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.768 09:36:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:23.768 09:36:02 -- pm/common@44 -- $ pid=5079 00:03:23.768 09:36:02 -- pm/common@50 -- $ kill -TERM 5079 00:03:23.768 09:36:02 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:23.768 09:36:02 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:23.768 09:36:02 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:23.768 09:36:02 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:23.768 09:36:02 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:23.768 09:36:02 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:23.768 09:36:02 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:23.768 09:36:02 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:23.768 09:36:02 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:23.768 09:36:02 -- scripts/common.sh@336 -- # IFS=.-: 00:03:23.768 09:36:02 -- scripts/common.sh@336 -- # read -ra ver1 00:03:23.768 09:36:02 -- scripts/common.sh@337 -- # IFS=.-: 00:03:23.768 09:36:02 -- scripts/common.sh@337 -- # read -ra ver2 00:03:23.768 09:36:02 -- scripts/common.sh@338 -- # local 'op=<' 00:03:23.768 09:36:02 -- scripts/common.sh@340 -- # ver1_l=2 00:03:23.768 09:36:02 -- scripts/common.sh@341 -- # ver2_l=1 00:03:23.768 09:36:02 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:23.768 09:36:02 -- scripts/common.sh@344 -- # case "$op" in 00:03:23.768 09:36:02 -- scripts/common.sh@345 -- # : 1 00:03:23.768 09:36:02 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:23.768 09:36:02 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:23.768 09:36:02 -- scripts/common.sh@365 -- # decimal 1 00:03:23.768 09:36:02 -- scripts/common.sh@353 -- # local d=1 00:03:23.768 09:36:02 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:23.768 09:36:02 -- scripts/common.sh@355 -- # echo 1 00:03:23.768 09:36:02 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:23.768 09:36:02 -- scripts/common.sh@366 -- # decimal 2 00:03:23.768 09:36:02 -- scripts/common.sh@353 -- # local d=2 00:03:23.768 09:36:02 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:23.768 09:36:02 -- scripts/common.sh@355 -- # echo 2 00:03:23.768 09:36:02 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:23.768 09:36:02 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:23.768 09:36:02 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:23.768 09:36:02 -- scripts/common.sh@368 -- # return 0 00:03:23.768 09:36:02 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:23.768 09:36:02 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:23.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.768 --rc genhtml_branch_coverage=1 00:03:23.768 --rc genhtml_function_coverage=1 00:03:23.768 --rc genhtml_legend=1 00:03:23.768 --rc geninfo_all_blocks=1 00:03:23.768 --rc geninfo_unexecuted_blocks=1 00:03:23.768 00:03:23.768 ' 00:03:23.768 09:36:02 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:23.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.768 --rc genhtml_branch_coverage=1 00:03:23.768 --rc genhtml_function_coverage=1 00:03:23.768 --rc genhtml_legend=1 00:03:23.768 --rc geninfo_all_blocks=1 00:03:23.768 --rc geninfo_unexecuted_blocks=1 00:03:23.768 00:03:23.768 ' 00:03:23.768 09:36:02 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:23.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.768 --rc genhtml_branch_coverage=1 00:03:23.768 --rc genhtml_function_coverage=1 00:03:23.768 --rc genhtml_legend=1 00:03:23.768 --rc geninfo_all_blocks=1 00:03:23.768 --rc geninfo_unexecuted_blocks=1 00:03:23.768 00:03:23.768 ' 00:03:23.768 09:36:02 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:23.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.768 --rc genhtml_branch_coverage=1 00:03:23.768 --rc genhtml_function_coverage=1 00:03:23.768 --rc genhtml_legend=1 00:03:23.768 --rc geninfo_all_blocks=1 00:03:23.768 --rc geninfo_unexecuted_blocks=1 00:03:23.768 00:03:23.768 ' 00:03:23.768 09:36:02 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:23.768 09:36:02 -- nvmf/common.sh@7 -- # uname -s 00:03:23.768 09:36:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:23.768 09:36:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:23.768 09:36:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:23.768 09:36:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:23.768 09:36:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:23.768 09:36:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:23.768 09:36:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:23.768 09:36:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:23.768 09:36:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:23.768 09:36:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:23.768 09:36:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1aad585-0614-4c54-866f-fbb91759721c 00:03:23.769 
09:36:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=b1aad585-0614-4c54-866f-fbb91759721c 00:03:23.769 09:36:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:23.769 09:36:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:23.769 09:36:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:23.769 09:36:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:23.769 09:36:02 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:23.769 09:36:02 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:23.769 09:36:02 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:23.769 09:36:02 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:23.769 09:36:02 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:23.769 09:36:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.769 09:36:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.769 09:36:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.769 09:36:02 -- paths/export.sh@5 -- # export PATH 00:03:23.769 09:36:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.769 09:36:02 -- nvmf/common.sh@51 -- # : 0 00:03:23.769 09:36:02 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:23.769 09:36:02 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:23.769 09:36:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:23.769 09:36:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:23.769 09:36:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:23.769 09:36:02 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:23.769 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:23.769 09:36:02 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:23.769 09:36:02 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:23.769 09:36:02 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:23.769 09:36:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:23.769 09:36:02 -- spdk/autotest.sh@32 -- # uname -s 00:03:23.769 09:36:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:23.769 09:36:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:23.769 09:36:02 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:23.769 09:36:02 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:23.769 09:36:02 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:23.769 09:36:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:23.769 09:36:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:23.769 09:36:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:23.769 09:36:02 -- spdk/autotest.sh@48 -- # udevadm_pid=54231 00:03:23.769 09:36:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:23.769 09:36:02 -- pm/common@17 -- # local monitor 00:03:23.769 09:36:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.769 09:36:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.769 09:36:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:23.769 09:36:02 -- pm/common@25 -- # sleep 1 00:03:23.769 09:36:02 -- pm/common@21 -- # date +%s 00:03:23.769 09:36:02 -- pm/common@21 -- # date +%s 00:03:23.769 09:36:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732786562 00:03:23.769 09:36:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732786562 00:03:24.029 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732786562_collect-vmstat.pm.log 00:03:24.029 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732786562_collect-cpu-load.pm.log 00:03:24.969 09:36:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:24.969 09:36:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:24.969 09:36:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:24.969 09:36:03 -- common/autotest_common.sh@10 -- # set +x 00:03:24.969 09:36:03 -- spdk/autotest.sh@59 -- # create_test_list 00:03:24.969 09:36:03 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:24.969 09:36:03 -- common/autotest_common.sh@10 -- # set +x 00:03:24.969 09:36:03 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:24.969 09:36:03 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:24.969 09:36:03 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:24.969 09:36:03 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:24.969 09:36:03 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:24.969 09:36:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:24.969 09:36:03 -- common/autotest_common.sh@1457 -- # uname 00:03:24.969 09:36:03 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:24.969 09:36:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:24.969 09:36:03 -- common/autotest_common.sh@1477 -- # uname 00:03:24.969 09:36:03 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:24.969 09:36:03 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:24.969 09:36:03 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:24.969 lcov: LCOV version 1.15 00:03:24.969 09:36:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:39.867 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:39.867 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:54.778 09:36:33 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:54.778 09:36:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:54.778 09:36:33 -- common/autotest_common.sh@10 -- # set +x 00:03:54.778 09:36:33 -- spdk/autotest.sh@78 -- # rm -f 00:03:54.778 09:36:33 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.350 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.924 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:55.924 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:55.924 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:55.924 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:55.924 09:36:34 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:55.924 09:36:34 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:55.924 09:36:34 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:55.924 09:36:34 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:55.924 09:36:34 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:55.924 09:36:34 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:55.924 09:36:34 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:55.924 09:36:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.924 09:36:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.924 09:36:34 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:55.924 09:36:34 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:55.924 09:36:34 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:55.924 09:36:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:55.924 09:36:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.924 09:36:34 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:55.924 09:36:34 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:03:55.924 09:36:34 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:03:55.924 09:36:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:55.924 09:36:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.924 09:36:34 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:55.924 09:36:34 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:03:55.924 09:36:34 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:03:55.924 09:36:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:55.924 09:36:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.924 09:36:34 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:55.924 09:36:34 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:03:55.924 09:36:34 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:03:55.924 09:36:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:55.924 09:36:34 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.924 09:36:34 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:55.924 09:36:34 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:03:55.924 09:36:34 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:03:55.924 09:36:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:55.924 09:36:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.924 09:36:34 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:55.924 09:36:34 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:03:55.924 09:36:34 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:03:55.924 09:36:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:55.924 09:36:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.924 09:36:34 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:55.924 09:36:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.924 09:36:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.924 09:36:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:55.924 09:36:34 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:55.924 09:36:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:55.924 No valid GPT data, bailing 00:03:55.924 09:36:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:55.924 09:36:34 -- scripts/common.sh@394 -- # pt= 00:03:55.924 09:36:34 -- scripts/common.sh@395 -- # return 1 00:03:55.924 09:36:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:55.924 1+0 records in 00:03:55.924 1+0 records out 00:03:55.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273472 s, 38.3 MB/s 00:03:55.924 09:36:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.924 09:36:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.924 09:36:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:55.924 09:36:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:55.924 09:36:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:55.924 No valid GPT data, bailing 00:03:55.924 09:36:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:56.187 09:36:34 -- scripts/common.sh@394 -- # pt= 00:03:56.187 09:36:34 -- scripts/common.sh@395 -- # return 1 00:03:56.187 09:36:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:56.187 1+0 records in 00:03:56.187 1+0 records out 00:03:56.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00525739 s, 199 MB/s 00:03:56.187 09:36:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.187 09:36:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:56.187 09:36:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:56.187 09:36:34 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:56.187 09:36:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:56.187 No valid GPT data, bailing 00:03:56.187 09:36:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:56.187 09:36:34 -- scripts/common.sh@394 -- # pt= 00:03:56.187 09:36:34 -- scripts/common.sh@395 -- # return 1 00:03:56.187 09:36:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:56.187 1+0 
records in 00:03:56.187 1+0 records out 00:03:56.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618189 s, 170 MB/s 00:03:56.187 09:36:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.187 09:36:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:56.187 09:36:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:03:56.187 09:36:34 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:03:56.187 09:36:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:56.187 No valid GPT data, bailing 00:03:56.187 09:36:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:56.187 09:36:34 -- scripts/common.sh@394 -- # pt= 00:03:56.187 09:36:34 -- scripts/common.sh@395 -- # return 1 00:03:56.187 09:36:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:56.187 1+0 records in 00:03:56.187 1+0 records out 00:03:56.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.005343 s, 196 MB/s 00:03:56.187 09:36:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.187 09:36:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:56.187 09:36:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:03:56.187 09:36:34 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:03:56.187 09:36:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:56.187 No valid GPT data, bailing 00:03:56.187 09:36:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:56.447 09:36:35 -- scripts/common.sh@394 -- # pt= 00:03:56.447 09:36:35 -- scripts/common.sh@395 -- # return 1 00:03:56.447 09:36:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:56.447 1+0 records in 00:03:56.447 1+0 records out 00:03:56.447 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472988 s, 222 MB/s 00:03:56.447 09:36:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.447 09:36:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:56.447 09:36:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:56.447 09:36:35 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:56.447 09:36:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:56.447 No valid GPT data, bailing 00:03:56.447 09:36:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:56.447 09:36:35 -- scripts/common.sh@394 -- # pt= 00:03:56.447 09:36:35 -- scripts/common.sh@395 -- # return 1 00:03:56.447 09:36:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:56.447 1+0 records in 00:03:56.447 1+0 records out 00:03:56.447 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00622685 s, 168 MB/s 00:03:56.447 09:36:35 -- spdk/autotest.sh@105 -- # sync 00:03:56.447 09:36:35 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:56.447 09:36:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:56.447 09:36:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:58.364 09:36:36 -- spdk/autotest.sh@111 -- # uname -s 00:03:58.364 09:36:36 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:58.364 09:36:36 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:58.364 09:36:36 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:58.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.201 
Hugepages 00:03:59.201 node hugesize free / total 00:03:59.201 node0 1048576kB 0 / 0 00:03:59.201 node0 2048kB 0 / 0 00:03:59.201 00:03:59.201 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:59.201 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:59.201 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:59.201 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:59.201 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:03:59.462 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:59.462 09:36:38 -- spdk/autotest.sh@117 -- # uname -s 00:03:59.462 09:36:38 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:59.462 09:36:38 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:59.462 09:36:38 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:59.724 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.296 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.296 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.296 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.558 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.558 09:36:39 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:01.502 09:36:40 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:01.502 09:36:40 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:01.502 09:36:40 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:01.502 09:36:40 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:01.502 09:36:40 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:01.502 09:36:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:01.502 09:36:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.502 09:36:40 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:01.502 09:36:40 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:01.502 09:36:40 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:01.502 09:36:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:01.502 09:36:40 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.764 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.027 Waiting for block devices as requested 00:04:02.027 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.289 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.289 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.289 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:07.665 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:07.665 09:36:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:07.665 09:36:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:07.665 09:36:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:07.665 09:36:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:07.665 09:36:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:07.665 09:36:46 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:07.665 09:36:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:07.665 09:36:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:07.665 09:36:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:07.665 09:36:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:07.665 09:36:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:07.665 09:36:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:07.665 09:36:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:07.665 09:36:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:07.665 09:36:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:07.665 09:36:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:07.665 09:36:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:07.665 09:36:46 -- common/autotest_common.sh@1543 -- # continue 00:04:07.665 09:36:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:07.665 09:36:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:07.665 09:36:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:07.665 09:36:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:07.665 09:36:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:07.665 09:36:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:07.665 09:36:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:07.665 09:36:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:07.665 09:36:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:07.665 09:36:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:07.665 09:36:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:07.665 09:36:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:07.665 09:36:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:07.665 09:36:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:07.665 09:36:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:07.665 09:36:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:07.665 09:36:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:07.665 09:36:46 -- common/autotest_common.sh@1543 -- # continue 00:04:07.665 09:36:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:07.665 09:36:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:07.665 09:36:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:07.665 09:36:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:07.665 09:36:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:07.665 09:36:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:07.665 09:36:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:07.665 09:36:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:07.665 09:36:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:07.665 09:36:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:07.665 09:36:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:07.665 09:36:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:07.665 09:36:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:07.665 09:36:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:07.665 09:36:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:07.665 09:36:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:07.665 09:36:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:07.665 09:36:46 -- common/autotest_common.sh@1543 -- # continue 00:04:07.665 09:36:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:07.665 09:36:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:07.665 09:36:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:07.665 09:36:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:07.665 09:36:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:07.665 09:36:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:07.665 09:36:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:07.665 09:36:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:07.665 09:36:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:07.665 09:36:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:07.665 09:36:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:07.665 09:36:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:07.665 09:36:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:07.665 09:36:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:07.666 09:36:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:07.666 09:36:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:07.666 09:36:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:07.666 09:36:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
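The xtrace above shows autotest's pre-cleanup walking every NVMe controller: it reads OACS from 'nvme id-ctrl' to see whether Namespace Management is supported (bit 3, value 0x8, which is why oacs=' 0x12a' yields oacs_ns_manage=8), then reads unvmcap; when unvmcap is 0, as it is here, there is no unallocated capacity to reclaim and the loop simply continues. The same probe can be reproduced stand-alone roughly as follows (requires nvme-cli and root; the device names are whatever /dev/nvme* controllers exist on the host):

    for ctrl in /dev/nvme[0-9]; do
        # OACS bit 3 (0x8) = Namespace Management/Attachment supported
        oacs=$(nvme id-ctrl "$ctrl" | grep -w oacs | cut -d: -f2)
        unvmcap=$(nvme id-ctrl "$ctrl" | grep -w unvmcap | cut -d: -f2)
        printf '%s: oacs=%s ns-manage=%d unvmcap=%s\n' \
            "$ctrl" "$oacs" "$(( oacs & 0x8 ))" "$unvmcap"
    done
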
00:04:07.666 09:36:46 -- common/autotest_common.sh@1543 -- # continue 00:04:07.666 09:36:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:07.666 09:36:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.666 09:36:46 -- common/autotest_common.sh@10 -- # set +x 00:04:07.666 09:36:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:07.666 09:36:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.666 09:36:46 -- common/autotest_common.sh@10 -- # set +x 00:04:07.666 09:36:46 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:08.234 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.494 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:08.754 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:08.754 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:08.754 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:08.754 09:36:47 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:08.754 09:36:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.754 09:36:47 -- common/autotest_common.sh@10 -- # set +x 00:04:08.754 09:36:47 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:08.754 09:36:47 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:08.754 09:36:47 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:08.754 09:36:47 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:08.754 09:36:47 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:08.754 09:36:47 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:08.754 09:36:47 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:08.754 09:36:47 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:08.754 09:36:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:08.754 09:36:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:08.754 09:36:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:08.754 09:36:47 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:08.754 09:36:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:08.754 09:36:47 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:08.754 09:36:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:08.755 09:36:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:08.755 09:36:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:09.015 09:36:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:09.015 09:36:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:09.015 09:36:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:09.015 09:36:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:09.015 09:36:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:09.015 09:36:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:09.015 09:36:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:09.015 09:36:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:09.015 09:36:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:09.015 09:36:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
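The opal_revert_cleanup step above only acts on controllers whose PCI device ID matches 0x0a54; every emulated controller in this VM reports 0x0010, so each comparison fails and the step ends up doing nothing. A stand-alone scan for such controllers, using the standard sysfs PCI attributes (0x010802 is the PCI class code for an NVMe controller), might look like:

    for dev in /sys/bus/pci/devices/*; do
        [[ $(cat "$dev/class") == 0x010802* ]] || continue    # keep NVMe controllers only
        if [[ $(cat "$dev/device") == 0x0a54 ]]; then
            echo "opal cleanup candidate: ${dev##*/}"
        fi
    done
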
00:04:09.015 09:36:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:09.015 09:36:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:09.015 09:36:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:09.015 09:36:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:09.015 09:36:47 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:09.015 09:36:47 -- common/autotest_common.sh@1572 -- # return 0 00:04:09.015 09:36:47 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:09.015 09:36:47 -- common/autotest_common.sh@1580 -- # return 0 00:04:09.015 09:36:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:09.015 09:36:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:09.015 09:36:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:09.015 09:36:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:09.015 09:36:47 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:09.015 09:36:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.015 09:36:47 -- common/autotest_common.sh@10 -- # set +x 00:04:09.015 09:36:47 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:09.015 09:36:47 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:09.015 09:36:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.015 09:36:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.015 09:36:47 -- common/autotest_common.sh@10 -- # set +x 00:04:09.015 ************************************ 00:04:09.015 START TEST env 00:04:09.015 ************************************ 00:04:09.015 09:36:47 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:09.015 * Looking for test storage... 00:04:09.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:09.015 09:36:47 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:09.015 09:36:47 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:09.015 09:36:47 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:09.015 09:36:47 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:09.015 09:36:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.015 09:36:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.015 09:36:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.015 09:36:47 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.015 09:36:47 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.015 09:36:47 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.015 09:36:47 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.015 09:36:47 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.016 09:36:47 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.016 09:36:47 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.016 09:36:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.016 09:36:47 env -- scripts/common.sh@344 -- # case "$op" in 00:04:09.016 09:36:47 env -- scripts/common.sh@345 -- # : 1 00:04:09.016 09:36:47 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.016 09:36:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:09.016 09:36:47 env -- scripts/common.sh@365 -- # decimal 1 00:04:09.016 09:36:47 env -- scripts/common.sh@353 -- # local d=1 00:04:09.016 09:36:47 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.016 09:36:47 env -- scripts/common.sh@355 -- # echo 1 00:04:09.016 09:36:47 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.016 09:36:47 env -- scripts/common.sh@366 -- # decimal 2 00:04:09.016 09:36:47 env -- scripts/common.sh@353 -- # local d=2 00:04:09.016 09:36:47 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.016 09:36:47 env -- scripts/common.sh@355 -- # echo 2 00:04:09.016 09:36:47 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.016 09:36:47 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.016 09:36:47 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.016 09:36:47 env -- scripts/common.sh@368 -- # return 0 00:04:09.016 09:36:47 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.016 09:36:47 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:09.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.016 --rc genhtml_branch_coverage=1 00:04:09.016 --rc genhtml_function_coverage=1 00:04:09.016 --rc genhtml_legend=1 00:04:09.016 --rc geninfo_all_blocks=1 00:04:09.016 --rc geninfo_unexecuted_blocks=1 00:04:09.016 00:04:09.016 ' 00:04:09.016 09:36:47 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:09.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.016 --rc genhtml_branch_coverage=1 00:04:09.016 --rc genhtml_function_coverage=1 00:04:09.016 --rc genhtml_legend=1 00:04:09.016 --rc geninfo_all_blocks=1 00:04:09.016 --rc geninfo_unexecuted_blocks=1 00:04:09.016 00:04:09.016 ' 00:04:09.016 09:36:47 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:09.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.016 --rc genhtml_branch_coverage=1 00:04:09.016 --rc genhtml_function_coverage=1 00:04:09.016 --rc genhtml_legend=1 00:04:09.016 --rc geninfo_all_blocks=1 00:04:09.016 --rc geninfo_unexecuted_blocks=1 00:04:09.016 00:04:09.016 ' 00:04:09.016 09:36:47 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:09.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.016 --rc genhtml_branch_coverage=1 00:04:09.016 --rc genhtml_function_coverage=1 00:04:09.016 --rc genhtml_legend=1 00:04:09.016 --rc geninfo_all_blocks=1 00:04:09.016 --rc geninfo_unexecuted_blocks=1 00:04:09.016 00:04:09.016 ' 00:04:09.016 09:36:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:09.016 09:36:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.016 09:36:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.016 09:36:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.016 ************************************ 00:04:09.016 START TEST env_memory 00:04:09.016 ************************************ 00:04:09.016 09:36:47 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:09.016 00:04:09.016 00:04:09.016 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.016 http://cunit.sourceforge.net/ 00:04:09.016 00:04:09.016 00:04:09.016 Suite: memory 00:04:09.277 Test: alloc and free memory map ...[2024-11-28 09:36:47.905630] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:09.277 passed 00:04:09.277 Test: mem map translation ...[2024-11-28 09:36:47.944761] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:09.277 [2024-11-28 09:36:47.944914] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:09.277 [2024-11-28 09:36:47.945036] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:09.277 [2024-11-28 09:36:47.945077] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:09.277 passed 00:04:09.277 Test: mem map registration ...[2024-11-28 09:36:48.013541] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:09.277 [2024-11-28 09:36:48.013682] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:09.277 passed 00:04:09.277 Test: mem map adjacent registrations ...passed 00:04:09.277 00:04:09.277 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.277 suites 1 1 n/a 0 0 00:04:09.277 tests 4 4 4 0 0 00:04:09.277 asserts 152 152 152 0 n/a 00:04:09.277 00:04:09.277 Elapsed time = 0.233 seconds 00:04:09.277 00:04:09.277 real 0m0.269s 00:04:09.277 user 0m0.239s 00:04:09.277 sys 0m0.021s 00:04:09.277 ************************************ 00:04:09.277 END TEST env_memory 00:04:09.277 ************************************ 00:04:09.277 09:36:48 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.277 09:36:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:09.538 09:36:48 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:09.538 09:36:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.538 09:36:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.538 09:36:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.538 ************************************ 00:04:09.538 START TEST env_vtophys 00:04:09.538 ************************************ 00:04:09.538 09:36:48 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:09.538 EAL: lib.eal log level changed from notice to debug 00:04:09.538 EAL: Detected lcore 0 as core 0 on socket 0 00:04:09.538 EAL: Detected lcore 1 as core 0 on socket 0 00:04:09.538 EAL: Detected lcore 2 as core 0 on socket 0 00:04:09.538 EAL: Detected lcore 3 as core 0 on socket 0 00:04:09.538 EAL: Detected lcore 4 as core 0 on socket 0 00:04:09.538 EAL: Detected lcore 5 as core 0 on socket 0 00:04:09.538 EAL: Detected lcore 6 as core 0 on socket 0 00:04:09.538 EAL: Detected lcore 7 as core 0 on socket 0 00:04:09.538 EAL: Detected lcore 8 as core 0 on socket 0 00:04:09.538 EAL: Detected lcore 9 as core 0 on socket 0 00:04:09.538 EAL: Maximum logical cores by configuration: 128 00:04:09.538 EAL: Detected CPU lcores: 10 00:04:09.538 EAL: Detected NUMA nodes: 1 00:04:09.538 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:09.538 EAL: Detected shared linkage of DPDK 00:04:09.538 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:09.538 EAL: Selected IOVA mode 'PA' 00:04:09.538 EAL: Probing VFIO support... 00:04:09.538 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:09.538 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:09.538 EAL: Ask a virtual area of 0x2e000 bytes 00:04:09.538 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:09.538 EAL: Setting up physically contiguous memory... 00:04:09.538 EAL: Setting maximum number of open files to 524288 00:04:09.538 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:09.538 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:09.538 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.538 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:09.538 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.538 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.538 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:09.538 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:09.538 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.538 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:09.538 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.538 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.538 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:09.538 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:09.538 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.538 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:09.538 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.538 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.538 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:09.538 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:09.538 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.538 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:09.538 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.538 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.538 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:09.538 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:09.538 EAL: Hugepages will be freed exactly as allocated. 00:04:09.538 EAL: No shared files mode enabled, IPC is disabled 00:04:09.538 EAL: No shared files mode enabled, IPC is disabled 00:04:09.538 EAL: TSC frequency is ~2600000 KHz 00:04:09.538 EAL: Main lcore 0 is ready (tid=7fe0328cea40;cpuset=[0]) 00:04:09.538 EAL: Trying to obtain current memory policy. 00:04:09.538 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.538 EAL: Restoring previous memory policy: 0 00:04:09.538 EAL: request: mp_malloc_sync 00:04:09.538 EAL: No shared files mode enabled, IPC is disabled 00:04:09.538 EAL: Heap on socket 0 was expanded by 2MB 00:04:09.538 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:09.538 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:09.539 EAL: Mem event callback 'spdk:(nil)' registered 00:04:09.539 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:09.539 00:04:09.539 00:04:09.539 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.539 http://cunit.sourceforge.net/ 00:04:09.539 00:04:09.539 00:04:09.539 Suite: components_suite 00:04:10.109 Test: vtophys_malloc_test ...passed 00:04:10.109 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:10.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.109 EAL: Restoring previous memory policy: 4 00:04:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.109 EAL: request: mp_malloc_sync 00:04:10.109 EAL: No shared files mode enabled, IPC is disabled 00:04:10.109 EAL: Heap on socket 0 was expanded by 4MB 00:04:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.109 EAL: request: mp_malloc_sync 00:04:10.109 EAL: No shared files mode enabled, IPC is disabled 00:04:10.109 EAL: Heap on socket 0 was shrunk by 4MB 00:04:10.109 EAL: Trying to obtain current memory policy. 00:04:10.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.109 EAL: Restoring previous memory policy: 4 00:04:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.109 EAL: request: mp_malloc_sync 00:04:10.109 EAL: No shared files mode enabled, IPC is disabled 00:04:10.109 EAL: Heap on socket 0 was expanded by 6MB 00:04:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.109 EAL: request: mp_malloc_sync 00:04:10.109 EAL: No shared files mode enabled, IPC is disabled 00:04:10.109 EAL: Heap on socket 0 was shrunk by 6MB 00:04:10.109 EAL: Trying to obtain current memory policy. 00:04:10.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.109 EAL: Restoring previous memory policy: 4 00:04:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.109 EAL: request: mp_malloc_sync 00:04:10.109 EAL: No shared files mode enabled, IPC is disabled 00:04:10.109 EAL: Heap on socket 0 was expanded by 10MB 00:04:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.109 EAL: request: mp_malloc_sync 00:04:10.109 EAL: No shared files mode enabled, IPC is disabled 00:04:10.109 EAL: Heap on socket 0 was shrunk by 10MB 00:04:10.109 EAL: Trying to obtain current memory policy. 00:04:10.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.109 EAL: Restoring previous memory policy: 4 00:04:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.109 EAL: request: mp_malloc_sync 00:04:10.109 EAL: No shared files mode enabled, IPC is disabled 00:04:10.109 EAL: Heap on socket 0 was expanded by 18MB 00:04:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.109 EAL: request: mp_malloc_sync 00:04:10.109 EAL: No shared files mode enabled, IPC is disabled 00:04:10.109 EAL: Heap on socket 0 was shrunk by 18MB 00:04:10.109 EAL: Trying to obtain current memory policy. 00:04:10.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.109 EAL: Restoring previous memory policy: 4 00:04:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.109 EAL: request: mp_malloc_sync 00:04:10.109 EAL: No shared files mode enabled, IPC is disabled 00:04:10.109 EAL: Heap on socket 0 was expanded by 34MB 00:04:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.109 EAL: request: mp_malloc_sync 00:04:10.109 EAL: No shared files mode enabled, IPC is disabled 00:04:10.109 EAL: Heap on socket 0 was shrunk by 34MB 00:04:10.109 EAL: Trying to obtain current memory policy. 
00:04:10.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.109 EAL: Restoring previous memory policy: 4 00:04:10.109 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.109 EAL: request: mp_malloc_sync 00:04:10.109 EAL: No shared files mode enabled, IPC is disabled 00:04:10.109 EAL: Heap on socket 0 was expanded by 66MB 00:04:10.370 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.370 EAL: request: mp_malloc_sync 00:04:10.370 EAL: No shared files mode enabled, IPC is disabled 00:04:10.370 EAL: Heap on socket 0 was shrunk by 66MB 00:04:10.370 EAL: Trying to obtain current memory policy. 00:04:10.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.370 EAL: Restoring previous memory policy: 4 00:04:10.370 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.370 EAL: request: mp_malloc_sync 00:04:10.370 EAL: No shared files mode enabled, IPC is disabled 00:04:10.370 EAL: Heap on socket 0 was expanded by 130MB 00:04:10.630 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.630 EAL: request: mp_malloc_sync 00:04:10.630 EAL: No shared files mode enabled, IPC is disabled 00:04:10.630 EAL: Heap on socket 0 was shrunk by 130MB 00:04:10.630 EAL: Trying to obtain current memory policy. 00:04:10.630 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.630 EAL: Restoring previous memory policy: 4 00:04:10.630 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.630 EAL: request: mp_malloc_sync 00:04:10.630 EAL: No shared files mode enabled, IPC is disabled 00:04:10.630 EAL: Heap on socket 0 was expanded by 258MB 00:04:10.890 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.150 EAL: request: mp_malloc_sync 00:04:11.150 EAL: No shared files mode enabled, IPC is disabled 00:04:11.150 EAL: Heap on socket 0 was shrunk by 258MB 00:04:11.411 EAL: Trying to obtain current memory policy. 00:04:11.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.411 EAL: Restoring previous memory policy: 4 00:04:11.411 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.411 EAL: request: mp_malloc_sync 00:04:11.411 EAL: No shared files mode enabled, IPC is disabled 00:04:11.411 EAL: Heap on socket 0 was expanded by 514MB 00:04:11.982 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.982 EAL: request: mp_malloc_sync 00:04:11.982 EAL: No shared files mode enabled, IPC is disabled 00:04:11.982 EAL: Heap on socket 0 was shrunk by 514MB 00:04:12.549 EAL: Trying to obtain current memory policy. 
00:04:12.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.549 EAL: Restoring previous memory policy: 4 00:04:12.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.549 EAL: request: mp_malloc_sync 00:04:12.549 EAL: No shared files mode enabled, IPC is disabled 00:04:12.549 EAL: Heap on socket 0 was expanded by 1026MB 00:04:13.515 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.515 EAL: request: mp_malloc_sync 00:04:13.515 EAL: No shared files mode enabled, IPC is disabled 00:04:13.515 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:14.451 passed 00:04:14.451 00:04:14.451 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.451 suites 1 1 n/a 0 0 00:04:14.451 tests 2 2 2 0 0 00:04:14.451 asserts 5817 5817 5817 0 n/a 00:04:14.451 00:04:14.451 Elapsed time = 4.670 seconds 00:04:14.451 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.451 EAL: request: mp_malloc_sync 00:04:14.451 EAL: No shared files mode enabled, IPC is disabled 00:04:14.451 EAL: Heap on socket 0 was shrunk by 2MB 00:04:14.451 EAL: No shared files mode enabled, IPC is disabled 00:04:14.451 EAL: No shared files mode enabled, IPC is disabled 00:04:14.451 EAL: No shared files mode enabled, IPC is disabled 00:04:14.451 ************************************ 00:04:14.451 END TEST env_vtophys 00:04:14.451 ************************************ 00:04:14.451 00:04:14.451 real 0m4.949s 00:04:14.451 user 0m4.012s 00:04:14.451 sys 0m0.786s 00:04:14.451 09:36:53 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.451 09:36:53 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:14.451 09:36:53 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:14.451 09:36:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.451 09:36:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.451 09:36:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.451 ************************************ 00:04:14.451 START TEST env_pci 00:04:14.451 ************************************ 00:04:14.451 09:36:53 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:14.451 00:04:14.451 00:04:14.451 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.451 http://cunit.sourceforge.net/ 00:04:14.451 00:04:14.451 00:04:14.451 Suite: pci 00:04:14.451 Test: pci_hook ...[2024-11-28 09:36:53.216105] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57002 has claimed it 00:04:14.451 passed 00:04:14.451 00:04:14.451 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.451 suites 1 1 n/a 0 0 00:04:14.451 tests 1 1 1 0 0 00:04:14.451 asserts 25 25 25 0 n/a 00:04:14.451 00:04:14.452 Elapsed time = 0.003 seconds 00:04:14.452 EAL: Cannot find device (10000:00:01.0) 00:04:14.452 EAL: Failed to attach device on primary process 00:04:14.452 00:04:14.452 real 0m0.057s 00:04:14.452 user 0m0.025s 00:04:14.452 sys 0m0.031s 00:04:14.452 ************************************ 00:04:14.452 END TEST env_pci 00:04:14.452 ************************************ 00:04:14.452 09:36:53 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.452 09:36:53 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:14.452 09:36:53 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:14.452 09:36:53 env -- env/env.sh@15 -- # uname 00:04:14.452 09:36:53 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:14.452 09:36:53 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:14.452 09:36:53 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:14.452 09:36:53 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:14.452 09:36:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.452 09:36:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.452 ************************************ 00:04:14.452 START TEST env_dpdk_post_init 00:04:14.452 ************************************ 00:04:14.452 09:36:53 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:14.712 EAL: Detected CPU lcores: 10 00:04:14.712 EAL: Detected NUMA nodes: 1 00:04:14.712 EAL: Detected shared linkage of DPDK 00:04:14.712 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.712 EAL: Selected IOVA mode 'PA' 00:04:14.712 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:14.712 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:14.712 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:14.712 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:14.712 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:14.712 Starting DPDK initialization... 00:04:14.712 Starting SPDK post initialization... 00:04:14.712 SPDK NVMe probe 00:04:14.712 Attaching to 0000:00:10.0 00:04:14.712 Attaching to 0000:00:11.0 00:04:14.712 Attaching to 0000:00:12.0 00:04:14.712 Attaching to 0000:00:13.0 00:04:14.712 Attached to 0000:00:13.0 00:04:14.712 Attached to 0000:00:10.0 00:04:14.712 Attached to 0000:00:11.0 00:04:14.712 Attached to 0000:00:12.0 00:04:14.712 Cleaning up... 
00:04:14.712 00:04:14.712 real 0m0.261s 00:04:14.712 user 0m0.079s 00:04:14.712 sys 0m0.084s 00:04:14.712 ************************************ 00:04:14.712 END TEST env_dpdk_post_init 00:04:14.712 ************************************ 00:04:14.712 09:36:53 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.712 09:36:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.973 09:36:53 env -- env/env.sh@26 -- # uname 00:04:14.973 09:36:53 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:14.973 09:36:53 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.973 09:36:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.973 09:36:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.973 09:36:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.973 ************************************ 00:04:14.973 START TEST env_mem_callbacks 00:04:14.973 ************************************ 00:04:14.973 09:36:53 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.973 EAL: Detected CPU lcores: 10 00:04:14.973 EAL: Detected NUMA nodes: 1 00:04:14.973 EAL: Detected shared linkage of DPDK 00:04:14.973 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.973 EAL: Selected IOVA mode 'PA' 00:04:14.973 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:14.973 00:04:14.973 00:04:14.973 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.973 http://cunit.sourceforge.net/ 00:04:14.973 00:04:14.973 00:04:14.973 Suite: memory 00:04:14.973 Test: test ... 00:04:14.973 register 0x200000200000 2097152 00:04:14.973 malloc 3145728 00:04:14.973 register 0x200000400000 4194304 00:04:14.973 buf 0x2000004fffc0 len 3145728 PASSED 00:04:14.973 malloc 64 00:04:14.973 buf 0x2000004ffec0 len 64 PASSED 00:04:14.973 malloc 4194304 00:04:14.973 register 0x200000800000 6291456 00:04:14.973 buf 0x2000009fffc0 len 4194304 PASSED 00:04:14.973 free 0x2000004fffc0 3145728 00:04:14.973 free 0x2000004ffec0 64 00:04:14.973 unregister 0x200000400000 4194304 PASSED 00:04:14.973 free 0x2000009fffc0 4194304 00:04:14.973 unregister 0x200000800000 6291456 PASSED 00:04:14.973 malloc 8388608 00:04:14.973 register 0x200000400000 10485760 00:04:14.973 buf 0x2000005fffc0 len 8388608 PASSED 00:04:14.973 free 0x2000005fffc0 8388608 00:04:14.973 unregister 0x200000400000 10485760 PASSED 00:04:14.973 passed 00:04:14.973 00:04:14.973 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.973 suites 1 1 n/a 0 0 00:04:14.973 tests 1 1 1 0 0 00:04:14.973 asserts 15 15 15 0 n/a 00:04:14.973 00:04:14.973 Elapsed time = 0.046 seconds 00:04:15.235 00:04:15.235 real 0m0.224s 00:04:15.235 user 0m0.061s 00:04:15.235 sys 0m0.057s 00:04:15.235 09:36:53 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.235 ************************************ 00:04:15.235 END TEST env_mem_callbacks 00:04:15.235 ************************************ 00:04:15.235 09:36:53 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:15.235 ************************************ 00:04:15.235 END TEST env 00:04:15.235 ************************************ 00:04:15.235 00:04:15.235 real 0m6.224s 00:04:15.235 user 0m4.573s 00:04:15.235 sys 0m1.200s 00:04:15.235 09:36:53 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.235 09:36:53 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:15.235 09:36:53 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:15.235 09:36:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.235 09:36:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.235 09:36:53 -- common/autotest_common.sh@10 -- # set +x 00:04:15.235 ************************************ 00:04:15.235 START TEST rpc 00:04:15.235 ************************************ 00:04:15.235 09:36:53 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:15.235 * Looking for test storage... 00:04:15.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.235 09:36:54 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.235 09:36:54 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:15.235 09:36:54 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.235 09:36:54 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:15.235 09:36:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.235 09:36:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.235 09:36:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.235 09:36:54 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.235 09:36:54 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.235 09:36:54 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.235 09:36:54 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.235 09:36:54 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.235 09:36:54 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.235 09:36:54 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.235 09:36:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.235 09:36:54 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:15.235 09:36:54 rpc -- scripts/common.sh@345 -- # : 1 00:04:15.235 09:36:54 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.235 09:36:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.235 09:36:54 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:15.235 09:36:54 rpc -- scripts/common.sh@353 -- # local d=1 00:04:15.235 09:36:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.235 09:36:54 rpc -- scripts/common.sh@355 -- # echo 1 00:04:15.235 09:36:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.235 09:36:54 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:15.235 09:36:54 rpc -- scripts/common.sh@353 -- # local d=2 00:04:15.235 09:36:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.235 09:36:54 rpc -- scripts/common.sh@355 -- # echo 2 00:04:15.235 09:36:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.235 09:36:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.235 09:36:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.235 09:36:54 rpc -- scripts/common.sh@368 -- # return 0 00:04:15.235 09:36:54 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.235 09:36:54 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:15.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.235 --rc genhtml_branch_coverage=1 00:04:15.235 --rc genhtml_function_coverage=1 00:04:15.235 --rc genhtml_legend=1 00:04:15.235 --rc geninfo_all_blocks=1 00:04:15.235 --rc geninfo_unexecuted_blocks=1 00:04:15.235 00:04:15.235 ' 00:04:15.235 09:36:54 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:15.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.235 --rc genhtml_branch_coverage=1 00:04:15.235 --rc genhtml_function_coverage=1 00:04:15.235 --rc genhtml_legend=1 00:04:15.235 --rc geninfo_all_blocks=1 00:04:15.235 --rc geninfo_unexecuted_blocks=1 00:04:15.235 00:04:15.235 ' 00:04:15.235 09:36:54 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:15.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.235 --rc genhtml_branch_coverage=1 00:04:15.235 --rc genhtml_function_coverage=1 00:04:15.235 --rc genhtml_legend=1 00:04:15.235 --rc geninfo_all_blocks=1 00:04:15.235 --rc geninfo_unexecuted_blocks=1 00:04:15.235 00:04:15.235 ' 00:04:15.235 09:36:54 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:15.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.235 --rc genhtml_branch_coverage=1 00:04:15.235 --rc genhtml_function_coverage=1 00:04:15.235 --rc genhtml_legend=1 00:04:15.235 --rc geninfo_all_blocks=1 00:04:15.235 --rc geninfo_unexecuted_blocks=1 00:04:15.235 00:04:15.235 ' 00:04:15.235 09:36:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57124 00:04:15.235 09:36:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.235 09:36:54 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:15.235 09:36:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57124 00:04:15.235 09:36:54 rpc -- common/autotest_common.sh@835 -- # '[' -z 57124 ']' 00:04:15.235 09:36:54 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.235 09:36:54 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.235 09:36:54 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:15.235 09:36:54 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.235 09:36:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.496 [2024-11-28 09:36:54.203562] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:15.496 [2024-11-28 09:36:54.203710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57124 ] 00:04:15.496 [2024-11-28 09:36:54.365733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.756 [2024-11-28 09:36:54.473770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:15.756 [2024-11-28 09:36:54.473823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57124' to capture a snapshot of events at runtime. 00:04:15.756 [2024-11-28 09:36:54.473833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:15.756 [2024-11-28 09:36:54.473842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:15.756 [2024-11-28 09:36:54.473850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57124 for offline analysis/debug. 00:04:15.756 [2024-11-28 09:36:54.474719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.327 09:36:55 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.327 09:36:55 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:16.327 09:36:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.327 09:36:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.327 09:36:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:16.327 09:36:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:16.327 09:36:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.327 09:36:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.327 09:36:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.327 ************************************ 00:04:16.327 START TEST rpc_integrity 00:04:16.327 ************************************ 00:04:16.327 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:16.327 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.327 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.327 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.327 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.327 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.327 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.327 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.327 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.327 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.327 09:36:55 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.328 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.328 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:16.328 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.328 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.328 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.328 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.328 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.328 { 00:04:16.328 "name": "Malloc0", 00:04:16.328 "aliases": [ 00:04:16.328 "d1af2cc0-ab80-4bed-9773-9ef495f32627" 00:04:16.328 ], 00:04:16.328 "product_name": "Malloc disk", 00:04:16.328 "block_size": 512, 00:04:16.328 "num_blocks": 16384, 00:04:16.328 "uuid": "d1af2cc0-ab80-4bed-9773-9ef495f32627", 00:04:16.328 "assigned_rate_limits": { 00:04:16.328 "rw_ios_per_sec": 0, 00:04:16.328 "rw_mbytes_per_sec": 0, 00:04:16.328 "r_mbytes_per_sec": 0, 00:04:16.328 "w_mbytes_per_sec": 0 00:04:16.328 }, 00:04:16.328 "claimed": false, 00:04:16.328 "zoned": false, 00:04:16.328 "supported_io_types": { 00:04:16.328 "read": true, 00:04:16.328 "write": true, 00:04:16.328 "unmap": true, 00:04:16.328 "flush": true, 00:04:16.328 "reset": true, 00:04:16.328 "nvme_admin": false, 00:04:16.328 "nvme_io": false, 00:04:16.328 "nvme_io_md": false, 00:04:16.328 "write_zeroes": true, 00:04:16.328 "zcopy": true, 00:04:16.328 "get_zone_info": false, 00:04:16.328 "zone_management": false, 00:04:16.328 "zone_append": false, 00:04:16.328 "compare": false, 00:04:16.328 "compare_and_write": false, 00:04:16.328 "abort": true, 00:04:16.328 "seek_hole": false, 00:04:16.328 "seek_data": false, 00:04:16.328 "copy": true, 00:04:16.328 "nvme_iov_md": false 00:04:16.328 }, 00:04:16.328 "memory_domains": [ 00:04:16.328 { 00:04:16.328 "dma_device_id": "system", 00:04:16.328 "dma_device_type": 1 00:04:16.328 }, 00:04:16.328 { 00:04:16.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.328 "dma_device_type": 2 00:04:16.328 } 00:04:16.328 ], 00:04:16.328 "driver_specific": {} 00:04:16.328 } 00:04:16.328 ]' 00:04:16.328 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.328 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.328 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:16.328 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.328 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.328 [2024-11-28 09:36:55.192590] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:16.328 [2024-11-28 09:36:55.192644] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.328 [2024-11-28 09:36:55.192668] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:16.328 [2024-11-28 09:36:55.192678] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.328 [2024-11-28 09:36:55.194823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.328 [2024-11-28 09:36:55.194865] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.328 Passthru0 00:04:16.328 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.328 
09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.328 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.328 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.588 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.588 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.588 { 00:04:16.588 "name": "Malloc0", 00:04:16.588 "aliases": [ 00:04:16.588 "d1af2cc0-ab80-4bed-9773-9ef495f32627" 00:04:16.588 ], 00:04:16.588 "product_name": "Malloc disk", 00:04:16.588 "block_size": 512, 00:04:16.588 "num_blocks": 16384, 00:04:16.588 "uuid": "d1af2cc0-ab80-4bed-9773-9ef495f32627", 00:04:16.588 "assigned_rate_limits": { 00:04:16.588 "rw_ios_per_sec": 0, 00:04:16.588 "rw_mbytes_per_sec": 0, 00:04:16.588 "r_mbytes_per_sec": 0, 00:04:16.588 "w_mbytes_per_sec": 0 00:04:16.588 }, 00:04:16.588 "claimed": true, 00:04:16.588 "claim_type": "exclusive_write", 00:04:16.588 "zoned": false, 00:04:16.588 "supported_io_types": { 00:04:16.588 "read": true, 00:04:16.588 "write": true, 00:04:16.588 "unmap": true, 00:04:16.588 "flush": true, 00:04:16.588 "reset": true, 00:04:16.588 "nvme_admin": false, 00:04:16.588 "nvme_io": false, 00:04:16.588 "nvme_io_md": false, 00:04:16.588 "write_zeroes": true, 00:04:16.588 "zcopy": true, 00:04:16.588 "get_zone_info": false, 00:04:16.589 "zone_management": false, 00:04:16.589 "zone_append": false, 00:04:16.589 "compare": false, 00:04:16.589 "compare_and_write": false, 00:04:16.589 "abort": true, 00:04:16.589 "seek_hole": false, 00:04:16.589 "seek_data": false, 00:04:16.589 "copy": true, 00:04:16.589 "nvme_iov_md": false 00:04:16.589 }, 00:04:16.589 "memory_domains": [ 00:04:16.589 { 00:04:16.589 "dma_device_id": "system", 00:04:16.589 "dma_device_type": 1 00:04:16.589 }, 00:04:16.589 { 00:04:16.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.589 "dma_device_type": 2 00:04:16.589 } 00:04:16.589 ], 00:04:16.589 "driver_specific": {} 00:04:16.589 }, 00:04:16.589 { 00:04:16.589 "name": "Passthru0", 00:04:16.589 "aliases": [ 00:04:16.589 "216b5607-2363-5f77-8e68-1ae35c5c8ab5" 00:04:16.589 ], 00:04:16.589 "product_name": "passthru", 00:04:16.589 "block_size": 512, 00:04:16.589 "num_blocks": 16384, 00:04:16.589 "uuid": "216b5607-2363-5f77-8e68-1ae35c5c8ab5", 00:04:16.589 "assigned_rate_limits": { 00:04:16.589 "rw_ios_per_sec": 0, 00:04:16.589 "rw_mbytes_per_sec": 0, 00:04:16.589 "r_mbytes_per_sec": 0, 00:04:16.589 "w_mbytes_per_sec": 0 00:04:16.589 }, 00:04:16.589 "claimed": false, 00:04:16.589 "zoned": false, 00:04:16.589 "supported_io_types": { 00:04:16.589 "read": true, 00:04:16.589 "write": true, 00:04:16.589 "unmap": true, 00:04:16.589 "flush": true, 00:04:16.589 "reset": true, 00:04:16.589 "nvme_admin": false, 00:04:16.589 "nvme_io": false, 00:04:16.589 "nvme_io_md": false, 00:04:16.589 "write_zeroes": true, 00:04:16.589 "zcopy": true, 00:04:16.589 "get_zone_info": false, 00:04:16.589 "zone_management": false, 00:04:16.589 "zone_append": false, 00:04:16.589 "compare": false, 00:04:16.589 "compare_and_write": false, 00:04:16.589 "abort": true, 00:04:16.589 "seek_hole": false, 00:04:16.589 "seek_data": false, 00:04:16.589 "copy": true, 00:04:16.589 "nvme_iov_md": false 00:04:16.589 }, 00:04:16.589 "memory_domains": [ 00:04:16.589 { 00:04:16.589 "dma_device_id": "system", 00:04:16.589 "dma_device_type": 1 00:04:16.589 }, 00:04:16.589 { 00:04:16.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.589 "dma_device_type": 2 
00:04:16.589 } 00:04:16.589 ], 00:04:16.589 "driver_specific": { 00:04:16.589 "passthru": { 00:04:16.589 "name": "Passthru0", 00:04:16.589 "base_bdev_name": "Malloc0" 00:04:16.589 } 00:04:16.589 } 00:04:16.589 } 00:04:16.589 ]' 00:04:16.589 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.589 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.589 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.589 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.589 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.589 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.589 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:16.589 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.589 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.589 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.589 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.589 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.589 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.589 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.589 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.589 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.589 09:36:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.589 00:04:16.589 real 0m0.248s 00:04:16.589 user 0m0.134s 00:04:16.589 sys 0m0.030s 00:04:16.589 ************************************ 00:04:16.589 END TEST rpc_integrity 00:04:16.589 ************************************ 00:04:16.589 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.589 09:36:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.589 09:36:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:16.589 09:36:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.589 09:36:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.589 09:36:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.589 ************************************ 00:04:16.589 START TEST rpc_plugins 00:04:16.589 ************************************ 00:04:16.589 09:36:55 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:16.589 09:36:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:16.589 09:36:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.589 09:36:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.589 09:36:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.589 09:36:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:16.589 09:36:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:16.589 09:36:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.589 09:36:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.589 09:36:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.589 09:36:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:16.589 { 00:04:16.589 "name": "Malloc1", 00:04:16.589 "aliases": 
[ 00:04:16.589 "0db301be-6f25-4198-838d-84eb2435655a" 00:04:16.589 ], 00:04:16.589 "product_name": "Malloc disk", 00:04:16.589 "block_size": 4096, 00:04:16.589 "num_blocks": 256, 00:04:16.589 "uuid": "0db301be-6f25-4198-838d-84eb2435655a", 00:04:16.589 "assigned_rate_limits": { 00:04:16.589 "rw_ios_per_sec": 0, 00:04:16.589 "rw_mbytes_per_sec": 0, 00:04:16.589 "r_mbytes_per_sec": 0, 00:04:16.589 "w_mbytes_per_sec": 0 00:04:16.589 }, 00:04:16.589 "claimed": false, 00:04:16.589 "zoned": false, 00:04:16.589 "supported_io_types": { 00:04:16.589 "read": true, 00:04:16.589 "write": true, 00:04:16.589 "unmap": true, 00:04:16.589 "flush": true, 00:04:16.589 "reset": true, 00:04:16.589 "nvme_admin": false, 00:04:16.589 "nvme_io": false, 00:04:16.589 "nvme_io_md": false, 00:04:16.589 "write_zeroes": true, 00:04:16.589 "zcopy": true, 00:04:16.589 "get_zone_info": false, 00:04:16.589 "zone_management": false, 00:04:16.589 "zone_append": false, 00:04:16.589 "compare": false, 00:04:16.589 "compare_and_write": false, 00:04:16.589 "abort": true, 00:04:16.589 "seek_hole": false, 00:04:16.589 "seek_data": false, 00:04:16.589 "copy": true, 00:04:16.589 "nvme_iov_md": false 00:04:16.589 }, 00:04:16.589 "memory_domains": [ 00:04:16.589 { 00:04:16.589 "dma_device_id": "system", 00:04:16.589 "dma_device_type": 1 00:04:16.589 }, 00:04:16.589 { 00:04:16.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.589 "dma_device_type": 2 00:04:16.589 } 00:04:16.589 ], 00:04:16.589 "driver_specific": {} 00:04:16.589 } 00:04:16.589 ]' 00:04:16.589 09:36:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:16.589 09:36:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:16.589 09:36:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:16.589 09:36:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.589 09:36:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.589 09:36:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.589 09:36:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:16.589 09:36:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.589 09:36:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.589 09:36:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.589 09:36:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:16.589 09:36:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:16.849 09:36:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:16.849 00:04:16.849 real 0m0.116s 00:04:16.849 user 0m0.065s 00:04:16.849 sys 0m0.017s 00:04:16.849 09:36:55 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.849 ************************************ 00:04:16.849 END TEST rpc_plugins 00:04:16.849 ************************************ 00:04:16.849 09:36:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.849 09:36:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:16.849 09:36:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.849 09:36:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.849 09:36:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.849 ************************************ 00:04:16.849 START TEST rpc_trace_cmd_test 00:04:16.849 ************************************ 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:16.849 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57124", 00:04:16.849 "tpoint_group_mask": "0x8", 00:04:16.849 "iscsi_conn": { 00:04:16.849 "mask": "0x2", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 }, 00:04:16.849 "scsi": { 00:04:16.849 "mask": "0x4", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 }, 00:04:16.849 "bdev": { 00:04:16.849 "mask": "0x8", 00:04:16.849 "tpoint_mask": "0xffffffffffffffff" 00:04:16.849 }, 00:04:16.849 "nvmf_rdma": { 00:04:16.849 "mask": "0x10", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 }, 00:04:16.849 "nvmf_tcp": { 00:04:16.849 "mask": "0x20", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 }, 00:04:16.849 "ftl": { 00:04:16.849 "mask": "0x40", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 }, 00:04:16.849 "blobfs": { 00:04:16.849 "mask": "0x80", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 }, 00:04:16.849 "dsa": { 00:04:16.849 "mask": "0x200", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 }, 00:04:16.849 "thread": { 00:04:16.849 "mask": "0x400", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 }, 00:04:16.849 "nvme_pcie": { 00:04:16.849 "mask": "0x800", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 }, 00:04:16.849 "iaa": { 00:04:16.849 "mask": "0x1000", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 }, 00:04:16.849 "nvme_tcp": { 00:04:16.849 "mask": "0x2000", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 }, 00:04:16.849 "bdev_nvme": { 00:04:16.849 "mask": "0x4000", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 }, 00:04:16.849 "sock": { 00:04:16.849 "mask": "0x8000", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 }, 00:04:16.849 "blob": { 00:04:16.849 "mask": "0x10000", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 }, 00:04:16.849 "bdev_raid": { 00:04:16.849 "mask": "0x20000", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 }, 00:04:16.849 "scheduler": { 00:04:16.849 "mask": "0x40000", 00:04:16.849 "tpoint_mask": "0x0" 00:04:16.849 } 00:04:16.849 }' 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:16.849 00:04:16.849 real 0m0.171s 00:04:16.849 user 0m0.138s 00:04:16.849 sys 0m0.025s 00:04:16.849 ************************************ 00:04:16.849 END TEST 
rpc_trace_cmd_test 00:04:16.849 ************************************ 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.849 09:36:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:17.111 09:36:55 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:17.111 09:36:55 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:17.111 09:36:55 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:17.111 09:36:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.111 09:36:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.111 09:36:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.111 ************************************ 00:04:17.111 START TEST rpc_daemon_integrity 00:04:17.111 ************************************ 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:17.111 { 00:04:17.111 "name": "Malloc2", 00:04:17.111 "aliases": [ 00:04:17.111 "e33473ff-6440-468d-af39-ead5ccfdb5c6" 00:04:17.111 ], 00:04:17.111 "product_name": "Malloc disk", 00:04:17.111 "block_size": 512, 00:04:17.111 "num_blocks": 16384, 00:04:17.111 "uuid": "e33473ff-6440-468d-af39-ead5ccfdb5c6", 00:04:17.111 "assigned_rate_limits": { 00:04:17.111 "rw_ios_per_sec": 0, 00:04:17.111 "rw_mbytes_per_sec": 0, 00:04:17.111 "r_mbytes_per_sec": 0, 00:04:17.111 "w_mbytes_per_sec": 0 00:04:17.111 }, 00:04:17.111 "claimed": false, 00:04:17.111 "zoned": false, 00:04:17.111 "supported_io_types": { 00:04:17.111 "read": true, 00:04:17.111 "write": true, 00:04:17.111 "unmap": true, 00:04:17.111 "flush": true, 00:04:17.111 "reset": true, 00:04:17.111 "nvme_admin": false, 00:04:17.111 "nvme_io": false, 00:04:17.111 "nvme_io_md": false, 00:04:17.111 "write_zeroes": true, 00:04:17.111 "zcopy": true, 00:04:17.111 "get_zone_info": false, 00:04:17.111 "zone_management": false, 00:04:17.111 "zone_append": false, 00:04:17.111 "compare": false, 
00:04:17.111 "compare_and_write": false, 00:04:17.111 "abort": true, 00:04:17.111 "seek_hole": false, 00:04:17.111 "seek_data": false, 00:04:17.111 "copy": true, 00:04:17.111 "nvme_iov_md": false 00:04:17.111 }, 00:04:17.111 "memory_domains": [ 00:04:17.111 { 00:04:17.111 "dma_device_id": "system", 00:04:17.111 "dma_device_type": 1 00:04:17.111 }, 00:04:17.111 { 00:04:17.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.111 "dma_device_type": 2 00:04:17.111 } 00:04:17.111 ], 00:04:17.111 "driver_specific": {} 00:04:17.111 } 00:04:17.111 ]' 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.111 [2024-11-28 09:36:55.892781] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:17.111 [2024-11-28 09:36:55.892855] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:17.111 [2024-11-28 09:36:55.892877] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:17.111 [2024-11-28 09:36:55.892888] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:17.111 [2024-11-28 09:36:55.895293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:17.111 [2024-11-28 09:36:55.895351] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:17.111 Passthru0 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:17.111 { 00:04:17.111 "name": "Malloc2", 00:04:17.111 "aliases": [ 00:04:17.111 "e33473ff-6440-468d-af39-ead5ccfdb5c6" 00:04:17.111 ], 00:04:17.111 "product_name": "Malloc disk", 00:04:17.111 "block_size": 512, 00:04:17.111 "num_blocks": 16384, 00:04:17.111 "uuid": "e33473ff-6440-468d-af39-ead5ccfdb5c6", 00:04:17.111 "assigned_rate_limits": { 00:04:17.111 "rw_ios_per_sec": 0, 00:04:17.111 "rw_mbytes_per_sec": 0, 00:04:17.111 "r_mbytes_per_sec": 0, 00:04:17.111 "w_mbytes_per_sec": 0 00:04:17.111 }, 00:04:17.111 "claimed": true, 00:04:17.111 "claim_type": "exclusive_write", 00:04:17.111 "zoned": false, 00:04:17.111 "supported_io_types": { 00:04:17.111 "read": true, 00:04:17.111 "write": true, 00:04:17.111 "unmap": true, 00:04:17.111 "flush": true, 00:04:17.111 "reset": true, 00:04:17.111 "nvme_admin": false, 00:04:17.111 "nvme_io": false, 00:04:17.111 "nvme_io_md": false, 00:04:17.111 "write_zeroes": true, 00:04:17.111 "zcopy": true, 00:04:17.111 "get_zone_info": false, 00:04:17.111 "zone_management": false, 00:04:17.111 "zone_append": false, 00:04:17.111 "compare": false, 00:04:17.111 "compare_and_write": false, 00:04:17.111 "abort": true, 00:04:17.111 "seek_hole": false, 00:04:17.111 
"seek_data": false, 00:04:17.111 "copy": true, 00:04:17.111 "nvme_iov_md": false 00:04:17.111 }, 00:04:17.111 "memory_domains": [ 00:04:17.111 { 00:04:17.111 "dma_device_id": "system", 00:04:17.111 "dma_device_type": 1 00:04:17.111 }, 00:04:17.111 { 00:04:17.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.111 "dma_device_type": 2 00:04:17.111 } 00:04:17.111 ], 00:04:17.111 "driver_specific": {} 00:04:17.111 }, 00:04:17.111 { 00:04:17.111 "name": "Passthru0", 00:04:17.111 "aliases": [ 00:04:17.111 "0532a90b-7207-583d-9732-821ca8aada2e" 00:04:17.111 ], 00:04:17.111 "product_name": "passthru", 00:04:17.111 "block_size": 512, 00:04:17.111 "num_blocks": 16384, 00:04:17.111 "uuid": "0532a90b-7207-583d-9732-821ca8aada2e", 00:04:17.111 "assigned_rate_limits": { 00:04:17.111 "rw_ios_per_sec": 0, 00:04:17.111 "rw_mbytes_per_sec": 0, 00:04:17.111 "r_mbytes_per_sec": 0, 00:04:17.111 "w_mbytes_per_sec": 0 00:04:17.111 }, 00:04:17.111 "claimed": false, 00:04:17.111 "zoned": false, 00:04:17.111 "supported_io_types": { 00:04:17.111 "read": true, 00:04:17.111 "write": true, 00:04:17.111 "unmap": true, 00:04:17.111 "flush": true, 00:04:17.111 "reset": true, 00:04:17.111 "nvme_admin": false, 00:04:17.111 "nvme_io": false, 00:04:17.111 "nvme_io_md": false, 00:04:17.111 "write_zeroes": true, 00:04:17.111 "zcopy": true, 00:04:17.111 "get_zone_info": false, 00:04:17.111 "zone_management": false, 00:04:17.111 "zone_append": false, 00:04:17.111 "compare": false, 00:04:17.111 "compare_and_write": false, 00:04:17.111 "abort": true, 00:04:17.111 "seek_hole": false, 00:04:17.111 "seek_data": false, 00:04:17.111 "copy": true, 00:04:17.111 "nvme_iov_md": false 00:04:17.111 }, 00:04:17.111 "memory_domains": [ 00:04:17.111 { 00:04:17.111 "dma_device_id": "system", 00:04:17.111 "dma_device_type": 1 00:04:17.111 }, 00:04:17.111 { 00:04:17.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.111 "dma_device_type": 2 00:04:17.111 } 00:04:17.111 ], 00:04:17.111 "driver_specific": { 00:04:17.111 "passthru": { 00:04:17.111 "name": "Passthru0", 00:04:17.111 "base_bdev_name": "Malloc2" 00:04:17.111 } 00:04:17.111 } 00:04:17.111 } 00:04:17.111 ]' 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:17.111 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:17.112 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.112 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.112 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.112 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:17.112 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.112 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.112 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.112 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:17.112 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.112 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.372 09:36:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.372 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # 
bdevs='[]' 00:04:17.372 09:36:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:17.372 09:36:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:17.372 00:04:17.372 real 0m0.243s 00:04:17.372 user 0m0.131s 00:04:17.372 sys 0m0.031s 00:04:17.372 ************************************ 00:04:17.372 END TEST rpc_daemon_integrity 00:04:17.372 ************************************ 00:04:17.372 09:36:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.372 09:36:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.372 09:36:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:17.372 09:36:56 rpc -- rpc/rpc.sh@84 -- # killprocess 57124 00:04:17.372 09:36:56 rpc -- common/autotest_common.sh@954 -- # '[' -z 57124 ']' 00:04:17.372 09:36:56 rpc -- common/autotest_common.sh@958 -- # kill -0 57124 00:04:17.372 09:36:56 rpc -- common/autotest_common.sh@959 -- # uname 00:04:17.372 09:36:56 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.372 09:36:56 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57124 00:04:17.372 09:36:56 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.372 09:36:56 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.372 killing process with pid 57124 00:04:17.372 09:36:56 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57124' 00:04:17.372 09:36:56 rpc -- common/autotest_common.sh@973 -- # kill 57124 00:04:17.372 09:36:56 rpc -- common/autotest_common.sh@978 -- # wait 57124 00:04:18.756 00:04:18.756 real 0m3.622s 00:04:18.756 user 0m4.003s 00:04:18.756 sys 0m0.654s 00:04:18.756 ************************************ 00:04:18.756 END TEST rpc 00:04:18.756 ************************************ 00:04:18.756 09:36:57 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.757 09:36:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.757 09:36:57 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:18.757 09:36:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.757 09:36:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.757 09:36:57 -- common/autotest_common.sh@10 -- # set +x 00:04:19.017 ************************************ 00:04:19.017 START TEST skip_rpc 00:04:19.017 ************************************ 00:04:19.017 09:36:57 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:19.017 * Looking for test storage... 
00:04:19.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.017 09:36:57 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.017 09:36:57 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.017 09:36:57 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.017 09:36:57 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.017 09:36:57 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:19.017 09:36:57 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.017 09:36:57 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.017 --rc genhtml_branch_coverage=1 00:04:19.017 --rc genhtml_function_coverage=1 00:04:19.017 --rc genhtml_legend=1 00:04:19.017 --rc geninfo_all_blocks=1 00:04:19.017 --rc geninfo_unexecuted_blocks=1 00:04:19.017 00:04:19.017 ' 00:04:19.017 09:36:57 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.017 --rc genhtml_branch_coverage=1 00:04:19.017 --rc genhtml_function_coverage=1 00:04:19.017 --rc genhtml_legend=1 00:04:19.017 --rc geninfo_all_blocks=1 00:04:19.017 --rc geninfo_unexecuted_blocks=1 00:04:19.017 00:04:19.017 ' 00:04:19.017 09:36:57 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:19.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.017 --rc genhtml_branch_coverage=1 00:04:19.017 --rc genhtml_function_coverage=1 00:04:19.017 --rc genhtml_legend=1 00:04:19.017 --rc geninfo_all_blocks=1 00:04:19.017 --rc geninfo_unexecuted_blocks=1 00:04:19.017 00:04:19.017 ' 00:04:19.017 09:36:57 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.017 --rc genhtml_branch_coverage=1 00:04:19.017 --rc genhtml_function_coverage=1 00:04:19.017 --rc genhtml_legend=1 00:04:19.017 --rc geninfo_all_blocks=1 00:04:19.017 --rc geninfo_unexecuted_blocks=1 00:04:19.017 00:04:19.017 ' 00:04:19.017 09:36:57 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.017 09:36:57 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:19.017 09:36:57 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:19.017 09:36:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.017 09:36:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.017 09:36:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.017 ************************************ 00:04:19.017 START TEST skip_rpc 00:04:19.017 ************************************ 00:04:19.017 09:36:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:19.017 09:36:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57336 00:04:19.017 09:36:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.017 09:36:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:19.017 09:36:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:19.017 [2024-11-28 09:36:57.890752] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
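The skip_rpc case starting here launches spdk_tgt with --no-rpc-server, so nothing listens on /var/tmp/spdk.sock and the spdk_get_version call made a few lines below is expected to fail (the NOT wrapper asserts a non-zero exit). A rough by-hand equivalent, using scripts/rpc.py in place of the test's rpc_cmd helper and assuming the same build tree:

  # no RPC listener is created, so any RPC must fail
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  ./scripts/rpc.py spdk_get_version || echo 'RPC failed as expected'
  kill %1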
00:04:19.017 [2024-11-28 09:36:57.890895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57336 ] 00:04:19.277 [2024-11-28 09:36:58.049689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.277 [2024-11-28 09:36:58.135339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57336 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57336 ']' 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57336 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57336 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.549 killing process with pid 57336 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57336' 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57336 00:04:24.549 09:37:02 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57336 00:04:25.484 00:04:25.484 real 0m6.220s 00:04:25.484 user 0m5.855s 00:04:25.484 sys 0m0.263s 00:04:25.484 ************************************ 00:04:25.484 END TEST skip_rpc 00:04:25.484 ************************************ 00:04:25.484 09:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.484 09:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:25.484 09:37:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:25.484 09:37:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.484 09:37:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.484 09:37:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.484 ************************************ 00:04:25.484 START TEST skip_rpc_with_json 00:04:25.484 ************************************ 00:04:25.484 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:25.484 09:37:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:25.484 09:37:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57435 00:04:25.484 09:37:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.484 09:37:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57435 00:04:25.484 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57435 ']' 00:04:25.484 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.484 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.484 09:37:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.484 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.484 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.484 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.484 [2024-11-28 09:37:04.158260] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
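The skip_rpc_with_json case starting here brings up a target with its RPC server enabled, creates a TCP transport, saves the live configuration with save_config (the JSON dumped below), then restarts the target from that file and greps its log for the transport-init notice. A condensed sketch of the same flow, using scripts/rpc.py in place of the test's rpc_cmd helper and the config/log paths shown in this run:

  # against the running target: create the transport and capture the config
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
  # replay the config on a fresh target and verify the TCP transport is recreated
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 \
      --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json \
      > /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 2>&1 &
  sleep 5; kill %1
  grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt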
00:04:25.484 [2024-11-28 09:37:04.158375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57435 ] 00:04:25.484 [2024-11-28 09:37:04.312464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.744 [2024-11-28 09:37:04.387218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.309 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.309 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:26.309 09:37:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:26.309 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.309 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.309 [2024-11-28 09:37:04.944232] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:26.309 request: 00:04:26.309 { 00:04:26.309 "trtype": "tcp", 00:04:26.309 "method": "nvmf_get_transports", 00:04:26.309 "req_id": 1 00:04:26.309 } 00:04:26.309 Got JSON-RPC error response 00:04:26.309 response: 00:04:26.309 { 00:04:26.309 "code": -19, 00:04:26.309 "message": "No such device" 00:04:26.309 } 00:04:26.309 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:26.309 09:37:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:26.309 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.309 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.309 [2024-11-28 09:37:04.956307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.309 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.309 09:37:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:26.309 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.309 09:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.309 09:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.309 09:37:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:26.309 { 00:04:26.309 "subsystems": [ 00:04:26.309 { 00:04:26.309 "subsystem": "fsdev", 00:04:26.309 "config": [ 00:04:26.309 { 00:04:26.309 "method": "fsdev_set_opts", 00:04:26.309 "params": { 00:04:26.309 "fsdev_io_pool_size": 65535, 00:04:26.309 "fsdev_io_cache_size": 256 00:04:26.309 } 00:04:26.309 } 00:04:26.309 ] 00:04:26.309 }, 00:04:26.309 { 00:04:26.309 "subsystem": "keyring", 00:04:26.309 "config": [] 00:04:26.309 }, 00:04:26.309 { 00:04:26.309 "subsystem": "iobuf", 00:04:26.309 "config": [ 00:04:26.309 { 00:04:26.309 "method": "iobuf_set_options", 00:04:26.309 "params": { 00:04:26.309 "small_pool_count": 8192, 00:04:26.309 "large_pool_count": 1024, 00:04:26.309 "small_bufsize": 8192, 00:04:26.309 "large_bufsize": 135168, 00:04:26.309 "enable_numa": false 00:04:26.309 } 00:04:26.309 } 00:04:26.309 ] 00:04:26.309 }, 00:04:26.309 { 00:04:26.309 "subsystem": "sock", 00:04:26.309 "config": [ 00:04:26.309 { 
00:04:26.309 "method": "sock_set_default_impl", 00:04:26.309 "params": { 00:04:26.309 "impl_name": "posix" 00:04:26.309 } 00:04:26.309 }, 00:04:26.309 { 00:04:26.309 "method": "sock_impl_set_options", 00:04:26.309 "params": { 00:04:26.309 "impl_name": "ssl", 00:04:26.309 "recv_buf_size": 4096, 00:04:26.309 "send_buf_size": 4096, 00:04:26.309 "enable_recv_pipe": true, 00:04:26.309 "enable_quickack": false, 00:04:26.309 "enable_placement_id": 0, 00:04:26.309 "enable_zerocopy_send_server": true, 00:04:26.309 "enable_zerocopy_send_client": false, 00:04:26.309 "zerocopy_threshold": 0, 00:04:26.309 "tls_version": 0, 00:04:26.309 "enable_ktls": false 00:04:26.309 } 00:04:26.309 }, 00:04:26.309 { 00:04:26.309 "method": "sock_impl_set_options", 00:04:26.309 "params": { 00:04:26.309 "impl_name": "posix", 00:04:26.309 "recv_buf_size": 2097152, 00:04:26.309 "send_buf_size": 2097152, 00:04:26.309 "enable_recv_pipe": true, 00:04:26.309 "enable_quickack": false, 00:04:26.309 "enable_placement_id": 0, 00:04:26.309 "enable_zerocopy_send_server": true, 00:04:26.309 "enable_zerocopy_send_client": false, 00:04:26.309 "zerocopy_threshold": 0, 00:04:26.309 "tls_version": 0, 00:04:26.309 "enable_ktls": false 00:04:26.309 } 00:04:26.309 } 00:04:26.309 ] 00:04:26.309 }, 00:04:26.309 { 00:04:26.310 "subsystem": "vmd", 00:04:26.310 "config": [] 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "subsystem": "accel", 00:04:26.310 "config": [ 00:04:26.310 { 00:04:26.310 "method": "accel_set_options", 00:04:26.310 "params": { 00:04:26.310 "small_cache_size": 128, 00:04:26.310 "large_cache_size": 16, 00:04:26.310 "task_count": 2048, 00:04:26.310 "sequence_count": 2048, 00:04:26.310 "buf_count": 2048 00:04:26.310 } 00:04:26.310 } 00:04:26.310 ] 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "subsystem": "bdev", 00:04:26.310 "config": [ 00:04:26.310 { 00:04:26.310 "method": "bdev_set_options", 00:04:26.310 "params": { 00:04:26.310 "bdev_io_pool_size": 65535, 00:04:26.310 "bdev_io_cache_size": 256, 00:04:26.310 "bdev_auto_examine": true, 00:04:26.310 "iobuf_small_cache_size": 128, 00:04:26.310 "iobuf_large_cache_size": 16 00:04:26.310 } 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "method": "bdev_raid_set_options", 00:04:26.310 "params": { 00:04:26.310 "process_window_size_kb": 1024, 00:04:26.310 "process_max_bandwidth_mb_sec": 0 00:04:26.310 } 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "method": "bdev_iscsi_set_options", 00:04:26.310 "params": { 00:04:26.310 "timeout_sec": 30 00:04:26.310 } 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "method": "bdev_nvme_set_options", 00:04:26.310 "params": { 00:04:26.310 "action_on_timeout": "none", 00:04:26.310 "timeout_us": 0, 00:04:26.310 "timeout_admin_us": 0, 00:04:26.310 "keep_alive_timeout_ms": 10000, 00:04:26.310 "arbitration_burst": 0, 00:04:26.310 "low_priority_weight": 0, 00:04:26.310 "medium_priority_weight": 0, 00:04:26.310 "high_priority_weight": 0, 00:04:26.310 "nvme_adminq_poll_period_us": 10000, 00:04:26.310 "nvme_ioq_poll_period_us": 0, 00:04:26.310 "io_queue_requests": 0, 00:04:26.310 "delay_cmd_submit": true, 00:04:26.310 "transport_retry_count": 4, 00:04:26.310 "bdev_retry_count": 3, 00:04:26.310 "transport_ack_timeout": 0, 00:04:26.310 "ctrlr_loss_timeout_sec": 0, 00:04:26.310 "reconnect_delay_sec": 0, 00:04:26.310 "fast_io_fail_timeout_sec": 0, 00:04:26.310 "disable_auto_failback": false, 00:04:26.310 "generate_uuids": false, 00:04:26.310 "transport_tos": 0, 00:04:26.310 "nvme_error_stat": false, 00:04:26.310 "rdma_srq_size": 0, 00:04:26.310 "io_path_stat": false, 
00:04:26.310 "allow_accel_sequence": false, 00:04:26.310 "rdma_max_cq_size": 0, 00:04:26.310 "rdma_cm_event_timeout_ms": 0, 00:04:26.310 "dhchap_digests": [ 00:04:26.310 "sha256", 00:04:26.310 "sha384", 00:04:26.310 "sha512" 00:04:26.310 ], 00:04:26.310 "dhchap_dhgroups": [ 00:04:26.310 "null", 00:04:26.310 "ffdhe2048", 00:04:26.310 "ffdhe3072", 00:04:26.310 "ffdhe4096", 00:04:26.310 "ffdhe6144", 00:04:26.310 "ffdhe8192" 00:04:26.310 ] 00:04:26.310 } 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "method": "bdev_nvme_set_hotplug", 00:04:26.310 "params": { 00:04:26.310 "period_us": 100000, 00:04:26.310 "enable": false 00:04:26.310 } 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "method": "bdev_wait_for_examine" 00:04:26.310 } 00:04:26.310 ] 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "subsystem": "scsi", 00:04:26.310 "config": null 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "subsystem": "scheduler", 00:04:26.310 "config": [ 00:04:26.310 { 00:04:26.310 "method": "framework_set_scheduler", 00:04:26.310 "params": { 00:04:26.310 "name": "static" 00:04:26.310 } 00:04:26.310 } 00:04:26.310 ] 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "subsystem": "vhost_scsi", 00:04:26.310 "config": [] 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "subsystem": "vhost_blk", 00:04:26.310 "config": [] 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "subsystem": "ublk", 00:04:26.310 "config": [] 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "subsystem": "nbd", 00:04:26.310 "config": [] 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "subsystem": "nvmf", 00:04:26.310 "config": [ 00:04:26.310 { 00:04:26.310 "method": "nvmf_set_config", 00:04:26.310 "params": { 00:04:26.310 "discovery_filter": "match_any", 00:04:26.310 "admin_cmd_passthru": { 00:04:26.310 "identify_ctrlr": false 00:04:26.310 }, 00:04:26.310 "dhchap_digests": [ 00:04:26.310 "sha256", 00:04:26.310 "sha384", 00:04:26.310 "sha512" 00:04:26.310 ], 00:04:26.310 "dhchap_dhgroups": [ 00:04:26.310 "null", 00:04:26.310 "ffdhe2048", 00:04:26.310 "ffdhe3072", 00:04:26.310 "ffdhe4096", 00:04:26.310 "ffdhe6144", 00:04:26.310 "ffdhe8192" 00:04:26.310 ] 00:04:26.310 } 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "method": "nvmf_set_max_subsystems", 00:04:26.310 "params": { 00:04:26.310 "max_subsystems": 1024 00:04:26.310 } 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "method": "nvmf_set_crdt", 00:04:26.310 "params": { 00:04:26.310 "crdt1": 0, 00:04:26.310 "crdt2": 0, 00:04:26.310 "crdt3": 0 00:04:26.310 } 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "method": "nvmf_create_transport", 00:04:26.310 "params": { 00:04:26.310 "trtype": "TCP", 00:04:26.310 "max_queue_depth": 128, 00:04:26.310 "max_io_qpairs_per_ctrlr": 127, 00:04:26.310 "in_capsule_data_size": 4096, 00:04:26.310 "max_io_size": 131072, 00:04:26.310 "io_unit_size": 131072, 00:04:26.310 "max_aq_depth": 128, 00:04:26.310 "num_shared_buffers": 511, 00:04:26.310 "buf_cache_size": 4294967295, 00:04:26.310 "dif_insert_or_strip": false, 00:04:26.310 "zcopy": false, 00:04:26.310 "c2h_success": true, 00:04:26.310 "sock_priority": 0, 00:04:26.310 "abort_timeout_sec": 1, 00:04:26.310 "ack_timeout": 0, 00:04:26.310 "data_wr_pool_size": 0 00:04:26.310 } 00:04:26.310 } 00:04:26.310 ] 00:04:26.310 }, 00:04:26.310 { 00:04:26.310 "subsystem": "iscsi", 00:04:26.310 "config": [ 00:04:26.310 { 00:04:26.310 "method": "iscsi_set_options", 00:04:26.310 "params": { 00:04:26.310 "node_base": "iqn.2016-06.io.spdk", 00:04:26.310 "max_sessions": 128, 00:04:26.310 "max_connections_per_session": 2, 00:04:26.310 "max_queue_depth": 64, 00:04:26.310 
"default_time2wait": 2, 00:04:26.310 "default_time2retain": 20, 00:04:26.310 "first_burst_length": 8192, 00:04:26.310 "immediate_data": true, 00:04:26.310 "allow_duplicated_isid": false, 00:04:26.310 "error_recovery_level": 0, 00:04:26.310 "nop_timeout": 60, 00:04:26.310 "nop_in_interval": 30, 00:04:26.310 "disable_chap": false, 00:04:26.310 "require_chap": false, 00:04:26.310 "mutual_chap": false, 00:04:26.310 "chap_group": 0, 00:04:26.310 "max_large_datain_per_connection": 64, 00:04:26.310 "max_r2t_per_connection": 4, 00:04:26.310 "pdu_pool_size": 36864, 00:04:26.310 "immediate_data_pool_size": 16384, 00:04:26.310 "data_out_pool_size": 2048 00:04:26.310 } 00:04:26.310 } 00:04:26.310 ] 00:04:26.310 } 00:04:26.310 ] 00:04:26.310 } 00:04:26.310 09:37:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:26.310 09:37:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57435 00:04:26.310 09:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57435 ']' 00:04:26.310 09:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57435 00:04:26.310 09:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:26.310 09:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.310 09:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57435 00:04:26.310 09:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.310 09:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.310 killing process with pid 57435 00:04:26.310 09:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57435' 00:04:26.310 09:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57435 00:04:26.310 09:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57435 00:04:27.684 09:37:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57469 00:04:27.684 09:37:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:27.684 09:37:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:32.941 09:37:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57469 00:04:32.942 09:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57469 ']' 00:04:32.942 09:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57469 00:04:32.942 09:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:32.942 09:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.942 09:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57469 00:04:32.942 09:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.942 09:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.942 killing process with pid 57469 00:04:32.942 09:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57469' 00:04:32.942 09:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57469 00:04:32.942 09:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57469 00:04:33.878 09:37:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:33.878 09:37:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:33.878 00:04:33.878 real 0m8.419s 00:04:33.878 user 0m8.027s 00:04:33.878 sys 0m0.573s 00:04:33.878 ************************************ 00:04:33.878 END TEST skip_rpc_with_json 00:04:33.878 ************************************ 00:04:33.878 09:37:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.878 09:37:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.878 09:37:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:33.878 09:37:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.878 09:37:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.878 09:37:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.878 ************************************ 00:04:33.878 START TEST skip_rpc_with_delay 00:04:33.878 ************************************ 00:04:33.878 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:33.878 09:37:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:33.878 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:33.878 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:33.879 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.879 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.879 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.879 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.879 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.879 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.879 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.879 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:33.879 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:33.879 [2024-11-28 09:37:12.643778] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
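The *ERROR* line above is exactly what skip_rpc_with_delay checks for: --wait-for-rpc pauses startup until an RPC tells the app to continue, which is meaningless when --no-rpc-server disables the RPC server, so the target refuses to start. A one-line sketch of the negative check, assuming the same binary:

  # expected to exit non-zero: --wait-for-rpc requires an RPC server, --no-rpc-server removes it
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc && echo 'unexpected success' || echo 'failed as expected'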
00:04:33.879 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:33.879 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:33.879 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:33.879 ************************************ 00:04:33.879 END TEST skip_rpc_with_delay 00:04:33.879 ************************************ 00:04:33.879 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:33.879 00:04:33.879 real 0m0.134s 00:04:33.879 user 0m0.076s 00:04:33.879 sys 0m0.057s 00:04:33.879 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.879 09:37:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:33.879 09:37:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:33.879 09:37:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:33.879 09:37:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:33.879 09:37:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.879 09:37:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.879 09:37:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.138 ************************************ 00:04:34.138 START TEST exit_on_failed_rpc_init 00:04:34.138 ************************************ 00:04:34.138 09:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:34.138 09:37:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57591 00:04:34.139 09:37:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57591 00:04:34.139 09:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57591 ']' 00:04:34.139 09:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.139 09:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.139 09:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.139 09:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.139 09:37:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.139 09:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:34.139 [2024-11-28 09:37:12.849913] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
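The exit_on_failed_rpc_init case starting here holds /var/tmp/spdk.sock with a first target and then starts a second instance against the same socket, expecting the "Unix domain socket path ... in use" failure shown below. A sketch of how two targets would normally coexist by giving the second instance its own RPC socket; the /var/tmp/spdk2.sock path is only an example, and the -r (app RPC socket) and -s (rpc.py server address) options are the standard SPDK flags as I understand them, not taken from this log:

  ./build/bin/spdk_tgt -m 0x1 &                            # first instance owns the default socket
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &     # second instance listens elsewhere
  ./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version # address the second instance explicitly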
00:04:34.139 [2024-11-28 09:37:12.850066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57591 ] 00:04:34.139 [2024-11-28 09:37:13.010356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.397 [2024-11-28 09:37:13.097554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.964 09:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.964 09:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:34.964 09:37:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.964 09:37:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.964 09:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:34.965 09:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.965 09:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.965 09:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.965 09:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.965 09:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.965 09:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.965 09:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.965 09:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.965 09:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:34.965 09:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.965 [2024-11-28 09:37:13.763315] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:34.965 [2024-11-28 09:37:13.763428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57604 ] 00:04:35.223 [2024-11-28 09:37:13.921833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.223 [2024-11-28 09:37:14.013398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.223 [2024-11-28 09:37:14.013467] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:35.223 [2024-11-28 09:37:14.013480] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:35.223 [2024-11-28 09:37:14.013492] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:35.481 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57591 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57591 ']' 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57591 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57591 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.482 killing process with pid 57591 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57591' 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57591 00:04:35.482 09:37:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57591 00:04:36.866 00:04:36.866 real 0m2.624s 00:04:36.866 user 0m2.923s 00:04:36.866 sys 0m0.410s 00:04:36.866 ************************************ 00:04:36.866 END TEST exit_on_failed_rpc_init 00:04:36.866 ************************************ 00:04:36.866 09:37:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.866 09:37:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.866 09:37:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:36.866 00:04:36.866 real 0m17.794s 00:04:36.866 user 0m17.036s 00:04:36.866 sys 0m1.476s 00:04:36.866 ************************************ 00:04:36.866 END TEST skip_rpc 00:04:36.866 ************************************ 00:04:36.866 09:37:15 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.866 09:37:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.866 09:37:15 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:36.866 09:37:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.866 09:37:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.866 09:37:15 -- common/autotest_common.sh@10 -- # set +x 00:04:36.866 
************************************ 00:04:36.866 START TEST rpc_client 00:04:36.866 ************************************ 00:04:36.866 09:37:15 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:36.866 * Looking for test storage... 00:04:36.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:36.866 09:37:15 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:36.866 09:37:15 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:36.866 09:37:15 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:36.866 09:37:15 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.866 09:37:15 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:36.866 09:37:15 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.866 09:37:15 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:36.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.866 --rc genhtml_branch_coverage=1 00:04:36.866 --rc genhtml_function_coverage=1 00:04:36.866 --rc genhtml_legend=1 00:04:36.866 --rc geninfo_all_blocks=1 00:04:36.866 --rc geninfo_unexecuted_blocks=1 00:04:36.866 00:04:36.866 ' 00:04:36.866 09:37:15 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:36.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.866 --rc genhtml_branch_coverage=1 00:04:36.866 --rc genhtml_function_coverage=1 00:04:36.866 --rc genhtml_legend=1 00:04:36.866 --rc geninfo_all_blocks=1 00:04:36.866 --rc geninfo_unexecuted_blocks=1 00:04:36.866 00:04:36.866 ' 00:04:36.866 09:37:15 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:36.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.866 --rc genhtml_branch_coverage=1 00:04:36.866 --rc genhtml_function_coverage=1 00:04:36.866 --rc genhtml_legend=1 00:04:36.866 --rc geninfo_all_blocks=1 00:04:36.866 --rc geninfo_unexecuted_blocks=1 00:04:36.866 00:04:36.866 ' 00:04:36.866 09:37:15 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:36.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.866 --rc genhtml_branch_coverage=1 00:04:36.866 --rc genhtml_function_coverage=1 00:04:36.866 --rc genhtml_legend=1 00:04:36.866 --rc geninfo_all_blocks=1 00:04:36.866 --rc geninfo_unexecuted_blocks=1 00:04:36.866 00:04:36.866 ' 00:04:36.866 09:37:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:36.866 OK 00:04:36.866 09:37:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:36.866 00:04:36.866 real 0m0.199s 00:04:36.866 user 0m0.112s 00:04:36.866 sys 0m0.091s 00:04:36.866 09:37:15 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.866 09:37:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:36.866 ************************************ 00:04:36.866 END TEST rpc_client 00:04:36.866 ************************************ 00:04:36.866 09:37:15 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:36.866 09:37:15 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.866 09:37:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.866 09:37:15 -- common/autotest_common.sh@10 -- # set +x 00:04:36.866 ************************************ 00:04:36.866 START TEST json_config 00:04:36.866 ************************************ 00:04:36.866 09:37:15 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:37.127 09:37:15 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.127 09:37:15 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.127 09:37:15 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.127 09:37:15 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.127 09:37:15 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.127 09:37:15 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.127 09:37:15 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.127 09:37:15 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.127 09:37:15 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.127 09:37:15 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.127 09:37:15 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.127 09:37:15 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.127 09:37:15 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.127 09:37:15 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.128 09:37:15 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.128 09:37:15 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:37.128 09:37:15 json_config -- scripts/common.sh@345 -- # : 1 00:04:37.128 09:37:15 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.128 09:37:15 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.128 09:37:15 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:37.128 09:37:15 json_config -- scripts/common.sh@353 -- # local d=1 00:04:37.128 09:37:15 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.128 09:37:15 json_config -- scripts/common.sh@355 -- # echo 1 00:04:37.128 09:37:15 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.128 09:37:15 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:37.128 09:37:15 json_config -- scripts/common.sh@353 -- # local d=2 00:04:37.128 09:37:15 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.128 09:37:15 json_config -- scripts/common.sh@355 -- # echo 2 00:04:37.128 09:37:15 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.128 09:37:15 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.128 09:37:15 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.128 09:37:15 json_config -- scripts/common.sh@368 -- # return 0 00:04:37.128 09:37:15 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.128 09:37:15 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.128 --rc genhtml_branch_coverage=1 00:04:37.128 --rc genhtml_function_coverage=1 00:04:37.128 --rc genhtml_legend=1 00:04:37.128 --rc geninfo_all_blocks=1 00:04:37.128 --rc geninfo_unexecuted_blocks=1 00:04:37.128 00:04:37.128 ' 00:04:37.128 09:37:15 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.128 --rc genhtml_branch_coverage=1 00:04:37.128 --rc genhtml_function_coverage=1 00:04:37.128 --rc genhtml_legend=1 00:04:37.128 --rc geninfo_all_blocks=1 00:04:37.128 --rc geninfo_unexecuted_blocks=1 00:04:37.128 00:04:37.128 ' 00:04:37.128 09:37:15 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.128 --rc genhtml_branch_coverage=1 00:04:37.128 --rc genhtml_function_coverage=1 00:04:37.128 --rc genhtml_legend=1 00:04:37.128 --rc geninfo_all_blocks=1 00:04:37.128 --rc geninfo_unexecuted_blocks=1 00:04:37.128 00:04:37.128 ' 00:04:37.128 09:37:15 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.128 --rc genhtml_branch_coverage=1 00:04:37.128 --rc genhtml_function_coverage=1 00:04:37.128 --rc genhtml_legend=1 00:04:37.128 --rc geninfo_all_blocks=1 00:04:37.128 --rc geninfo_unexecuted_blocks=1 00:04:37.128 00:04:37.128 ' 00:04:37.128 09:37:15 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:37.128 09:37:15 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1aad585-0614-4c54-866f-fbb91759721c 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b1aad585-0614-4c54-866f-fbb91759721c 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:37.128 09:37:15 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:37.128 09:37:15 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.128 09:37:15 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.128 09:37:15 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.128 09:37:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.128 09:37:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.128 09:37:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.128 09:37:15 json_config -- paths/export.sh@5 -- # export PATH 00:04:37.128 09:37:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@51 -- # : 0 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:37.128 09:37:15 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:37.128 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:37.128 09:37:15 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:37.128 09:37:15 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:37.128 09:37:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:37.129 09:37:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:37.129 09:37:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:37.129 09:37:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:37.129 WARNING: No tests are enabled so not running JSON configuration tests 00:04:37.129 09:37:15 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:37.129 09:37:15 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:37.129 00:04:37.129 real 0m0.133s 00:04:37.129 user 0m0.085s 00:04:37.129 sys 0m0.051s 00:04:37.129 09:37:15 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.129 ************************************ 00:04:37.129 END TEST json_config 00:04:37.129 ************************************ 00:04:37.129 09:37:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.129 09:37:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:37.129 09:37:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.129 09:37:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.129 09:37:15 -- common/autotest_common.sh@10 -- # set +x 00:04:37.129 ************************************ 00:04:37.129 START TEST json_config_extra_key 00:04:37.129 ************************************ 00:04:37.129 09:37:15 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:37.129 09:37:15 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.129 09:37:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.129 09:37:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.390 09:37:16 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.390 09:37:16 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.390 09:37:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:37.391 09:37:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.391 09:37:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.391 09:37:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.391 09:37:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:37.391 09:37:16 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.391 09:37:16 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.391 --rc genhtml_branch_coverage=1 00:04:37.391 --rc genhtml_function_coverage=1 00:04:37.391 --rc genhtml_legend=1 00:04:37.391 --rc geninfo_all_blocks=1 00:04:37.391 --rc geninfo_unexecuted_blocks=1 00:04:37.391 00:04:37.391 ' 00:04:37.391 09:37:16 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.391 --rc genhtml_branch_coverage=1 00:04:37.391 --rc genhtml_function_coverage=1 00:04:37.391 --rc genhtml_legend=1 00:04:37.391 --rc geninfo_all_blocks=1 00:04:37.391 --rc geninfo_unexecuted_blocks=1 00:04:37.391 00:04:37.391 ' 00:04:37.391 09:37:16 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.391 --rc genhtml_branch_coverage=1 00:04:37.391 --rc genhtml_function_coverage=1 00:04:37.391 --rc genhtml_legend=1 00:04:37.391 --rc geninfo_all_blocks=1 00:04:37.391 --rc geninfo_unexecuted_blocks=1 00:04:37.391 00:04:37.391 ' 00:04:37.391 09:37:16 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.391 --rc genhtml_branch_coverage=1 00:04:37.391 --rc 
genhtml_function_coverage=1 00:04:37.391 --rc genhtml_legend=1 00:04:37.391 --rc geninfo_all_blocks=1 00:04:37.391 --rc geninfo_unexecuted_blocks=1 00:04:37.391 00:04:37.391 ' 00:04:37.391 09:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b1aad585-0614-4c54-866f-fbb91759721c 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b1aad585-0614-4c54-866f-fbb91759721c 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:37.391 09:37:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:37.391 09:37:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.391 09:37:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.391 09:37:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.391 09:37:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.391 09:37:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.391 09:37:16 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.391 09:37:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:37.391 09:37:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:37.391 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:37.391 09:37:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:37.391 09:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:37.391 09:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:37.391 09:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:37.391 09:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:37.391 09:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:37.391 09:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:37.391 09:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:37.391 09:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:37.391 INFO: launching applications... 00:04:37.391 09:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:37.391 09:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:37.391 09:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:37.391 09:37:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:37.391 09:37:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:37.392 09:37:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:37.392 09:37:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:37.392 09:37:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:37.392 09:37:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:37.392 09:37:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.392 09:37:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.392 09:37:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57797 00:04:37.392 09:37:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:37.392 Waiting for target to run... 00:04:37.392 09:37:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57797 /var/tmp/spdk_tgt.sock 00:04:37.392 09:37:16 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57797 ']' 00:04:37.392 09:37:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:37.392 09:37:16 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:37.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:37.392 09:37:16 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.392 09:37:16 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:37.392 09:37:16 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.392 09:37:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:37.392 [2024-11-28 09:37:16.153001] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:37.392 [2024-11-28 09:37:16.153126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57797 ] 00:04:37.653 [2024-11-28 09:37:16.483654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.913 [2024-11-28 09:37:16.569911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.486 09:37:17 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.486 00:04:38.486 09:37:17 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:38.486 09:37:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:38.486 INFO: shutting down applications... 00:04:38.486 09:37:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:38.486 09:37:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:38.486 09:37:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:38.486 09:37:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:38.486 09:37:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57797 ]] 00:04:38.486 09:37:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57797 00:04:38.486 09:37:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:38.486 09:37:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.486 09:37:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:04:38.486 09:37:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:38.747 09:37:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:38.747 09:37:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.747 09:37:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:04:38.747 09:37:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.362 09:37:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.362 09:37:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.362 09:37:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:04:39.362 09:37:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.935 09:37:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.935 09:37:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.935 09:37:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:04:39.935 09:37:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.508 09:37:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.508 09:37:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.508 09:37:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:04:40.508 09:37:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:40.508 09:37:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:40.508 09:37:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:40.508 SPDK target shutdown done 00:04:40.508 09:37:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:40.508 Success 00:04:40.508 09:37:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:40.508 00:04:40.508 real 0m3.162s 00:04:40.508 user 0m2.796s 00:04:40.508 sys 0m0.400s 00:04:40.508 09:37:19 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.508 ************************************ 00:04:40.508 END TEST json_config_extra_key 00:04:40.508 ************************************ 00:04:40.508 09:37:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:40.508 09:37:19 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.508 09:37:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.508 09:37:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.508 09:37:19 -- common/autotest_common.sh@10 -- # set +x 00:04:40.508 
************************************ 00:04:40.508 START TEST alias_rpc 00:04:40.508 ************************************ 00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.508 * Looking for test storage... 00:04:40.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.508 09:37:19 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.508 --rc genhtml_branch_coverage=1 00:04:40.508 --rc genhtml_function_coverage=1 00:04:40.508 --rc genhtml_legend=1 00:04:40.508 --rc geninfo_all_blocks=1 00:04:40.508 --rc geninfo_unexecuted_blocks=1 00:04:40.508 00:04:40.508 ' 00:04:40.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.508 --rc genhtml_branch_coverage=1 00:04:40.508 --rc genhtml_function_coverage=1 00:04:40.508 --rc genhtml_legend=1 00:04:40.508 --rc geninfo_all_blocks=1 00:04:40.508 --rc geninfo_unexecuted_blocks=1 00:04:40.508 00:04:40.508 ' 00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.508 --rc genhtml_branch_coverage=1 00:04:40.508 --rc genhtml_function_coverage=1 00:04:40.508 --rc genhtml_legend=1 00:04:40.508 --rc geninfo_all_blocks=1 00:04:40.508 --rc geninfo_unexecuted_blocks=1 00:04:40.508 00:04:40.508 ' 00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.508 --rc genhtml_branch_coverage=1 00:04:40.508 --rc genhtml_function_coverage=1 00:04:40.508 --rc genhtml_legend=1 00:04:40.508 --rc geninfo_all_blocks=1 00:04:40.508 --rc geninfo_unexecuted_blocks=1 00:04:40.508 00:04:40.508 ' 00:04:40.508 09:37:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:40.508 09:37:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57895 00:04:40.508 09:37:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57895 00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57895 ']' 00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.508 09:37:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.508 09:37:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.769 [2024-11-28 09:37:19.393366] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:40.769 [2024-11-28 09:37:19.393520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57895 ] 00:04:40.769 [2024-11-28 09:37:19.557089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.031 [2024-11-28 09:37:19.651667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.602 09:37:20 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.602 09:37:20 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:41.602 09:37:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:41.860 09:37:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57895 00:04:41.860 09:37:20 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57895 ']' 00:04:41.860 09:37:20 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57895 00:04:41.860 09:37:20 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:41.860 09:37:20 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.860 09:37:20 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57895 00:04:41.860 killing process with pid 57895 00:04:41.860 09:37:20 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.860 09:37:20 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.860 09:37:20 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57895' 00:04:41.860 09:37:20 alias_rpc -- common/autotest_common.sh@973 -- # kill 57895 00:04:41.860 09:37:20 alias_rpc -- common/autotest_common.sh@978 -- # wait 57895 00:04:43.238 ************************************ 00:04:43.238 END TEST alias_rpc 00:04:43.238 ************************************ 00:04:43.238 00:04:43.238 real 0m2.571s 00:04:43.238 user 0m2.667s 00:04:43.238 sys 0m0.426s 00:04:43.238 09:37:21 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.238 09:37:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.238 09:37:21 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:43.238 09:37:21 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:43.238 09:37:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.238 09:37:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.238 09:37:21 -- common/autotest_common.sh@10 -- # set +x 00:04:43.238 ************************************ 00:04:43.238 START TEST spdkcli_tcp 00:04:43.238 ************************************ 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:43.238 * Looking for test storage... 
00:04:43.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.238 09:37:21 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:43.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.238 --rc genhtml_branch_coverage=1 00:04:43.238 --rc genhtml_function_coverage=1 00:04:43.238 --rc genhtml_legend=1 00:04:43.238 --rc geninfo_all_blocks=1 00:04:43.238 --rc geninfo_unexecuted_blocks=1 00:04:43.238 00:04:43.238 ' 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:43.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.238 --rc genhtml_branch_coverage=1 00:04:43.238 --rc genhtml_function_coverage=1 00:04:43.238 --rc genhtml_legend=1 00:04:43.238 --rc geninfo_all_blocks=1 00:04:43.238 --rc geninfo_unexecuted_blocks=1 00:04:43.238 
00:04:43.238 ' 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:43.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.238 --rc genhtml_branch_coverage=1 00:04:43.238 --rc genhtml_function_coverage=1 00:04:43.238 --rc genhtml_legend=1 00:04:43.238 --rc geninfo_all_blocks=1 00:04:43.238 --rc geninfo_unexecuted_blocks=1 00:04:43.238 00:04:43.238 ' 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:43.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.238 --rc genhtml_branch_coverage=1 00:04:43.238 --rc genhtml_function_coverage=1 00:04:43.238 --rc genhtml_legend=1 00:04:43.238 --rc geninfo_all_blocks=1 00:04:43.238 --rc geninfo_unexecuted_blocks=1 00:04:43.238 00:04:43.238 ' 00:04:43.238 09:37:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:43.238 09:37:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:43.238 09:37:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:43.238 09:37:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:43.238 09:37:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:43.238 09:37:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:43.238 09:37:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.238 09:37:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57981 00:04:43.238 09:37:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57981 00:04:43.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57981 ']' 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.238 09:37:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.238 09:37:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.238 [2024-11-28 09:37:21.985839] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:43.238 [2024-11-28 09:37:21.985955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57981 ] 00:04:43.496 [2024-11-28 09:37:22.141505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.496 [2024-11-28 09:37:22.217219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.496 [2024-11-28 09:37:22.217230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.063 09:37:22 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.063 09:37:22 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:44.063 09:37:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57998 00:04:44.063 09:37:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:44.063 09:37:22 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:44.321 [ 00:04:44.321 "bdev_malloc_delete", 00:04:44.321 "bdev_malloc_create", 00:04:44.321 "bdev_null_resize", 00:04:44.321 "bdev_null_delete", 00:04:44.321 "bdev_null_create", 00:04:44.321 "bdev_nvme_cuse_unregister", 00:04:44.321 "bdev_nvme_cuse_register", 00:04:44.321 "bdev_opal_new_user", 00:04:44.321 "bdev_opal_set_lock_state", 00:04:44.321 "bdev_opal_delete", 00:04:44.321 "bdev_opal_get_info", 00:04:44.321 "bdev_opal_create", 00:04:44.321 "bdev_nvme_opal_revert", 00:04:44.321 "bdev_nvme_opal_init", 00:04:44.321 "bdev_nvme_send_cmd", 00:04:44.321 "bdev_nvme_set_keys", 00:04:44.321 "bdev_nvme_get_path_iostat", 00:04:44.321 "bdev_nvme_get_mdns_discovery_info", 00:04:44.321 "bdev_nvme_stop_mdns_discovery", 00:04:44.321 "bdev_nvme_start_mdns_discovery", 00:04:44.321 "bdev_nvme_set_multipath_policy", 00:04:44.321 "bdev_nvme_set_preferred_path", 00:04:44.321 "bdev_nvme_get_io_paths", 00:04:44.321 "bdev_nvme_remove_error_injection", 00:04:44.321 "bdev_nvme_add_error_injection", 00:04:44.321 "bdev_nvme_get_discovery_info", 00:04:44.321 "bdev_nvme_stop_discovery", 00:04:44.321 "bdev_nvme_start_discovery", 00:04:44.321 "bdev_nvme_get_controller_health_info", 00:04:44.321 "bdev_nvme_disable_controller", 00:04:44.321 "bdev_nvme_enable_controller", 00:04:44.321 "bdev_nvme_reset_controller", 00:04:44.321 "bdev_nvme_get_transport_statistics", 00:04:44.321 "bdev_nvme_apply_firmware", 00:04:44.321 "bdev_nvme_detach_controller", 00:04:44.321 "bdev_nvme_get_controllers", 00:04:44.321 "bdev_nvme_attach_controller", 00:04:44.321 "bdev_nvme_set_hotplug", 00:04:44.321 "bdev_nvme_set_options", 00:04:44.321 "bdev_passthru_delete", 00:04:44.321 "bdev_passthru_create", 00:04:44.321 "bdev_lvol_set_parent_bdev", 00:04:44.321 "bdev_lvol_set_parent", 00:04:44.321 "bdev_lvol_check_shallow_copy", 00:04:44.321 "bdev_lvol_start_shallow_copy", 00:04:44.321 "bdev_lvol_grow_lvstore", 00:04:44.321 "bdev_lvol_get_lvols", 00:04:44.321 "bdev_lvol_get_lvstores", 00:04:44.321 "bdev_lvol_delete", 00:04:44.321 "bdev_lvol_set_read_only", 00:04:44.321 "bdev_lvol_resize", 00:04:44.321 "bdev_lvol_decouple_parent", 00:04:44.321 "bdev_lvol_inflate", 00:04:44.321 "bdev_lvol_rename", 00:04:44.321 "bdev_lvol_clone_bdev", 00:04:44.321 "bdev_lvol_clone", 00:04:44.321 "bdev_lvol_snapshot", 00:04:44.321 "bdev_lvol_create", 00:04:44.321 "bdev_lvol_delete_lvstore", 00:04:44.321 "bdev_lvol_rename_lvstore", 00:04:44.321 
"bdev_lvol_create_lvstore", 00:04:44.321 "bdev_raid_set_options", 00:04:44.321 "bdev_raid_remove_base_bdev", 00:04:44.321 "bdev_raid_add_base_bdev", 00:04:44.321 "bdev_raid_delete", 00:04:44.321 "bdev_raid_create", 00:04:44.321 "bdev_raid_get_bdevs", 00:04:44.321 "bdev_error_inject_error", 00:04:44.321 "bdev_error_delete", 00:04:44.321 "bdev_error_create", 00:04:44.322 "bdev_split_delete", 00:04:44.322 "bdev_split_create", 00:04:44.322 "bdev_delay_delete", 00:04:44.322 "bdev_delay_create", 00:04:44.322 "bdev_delay_update_latency", 00:04:44.322 "bdev_zone_block_delete", 00:04:44.322 "bdev_zone_block_create", 00:04:44.322 "blobfs_create", 00:04:44.322 "blobfs_detect", 00:04:44.322 "blobfs_set_cache_size", 00:04:44.322 "bdev_xnvme_delete", 00:04:44.322 "bdev_xnvme_create", 00:04:44.322 "bdev_aio_delete", 00:04:44.322 "bdev_aio_rescan", 00:04:44.322 "bdev_aio_create", 00:04:44.322 "bdev_ftl_set_property", 00:04:44.322 "bdev_ftl_get_properties", 00:04:44.322 "bdev_ftl_get_stats", 00:04:44.322 "bdev_ftl_unmap", 00:04:44.322 "bdev_ftl_unload", 00:04:44.322 "bdev_ftl_delete", 00:04:44.322 "bdev_ftl_load", 00:04:44.322 "bdev_ftl_create", 00:04:44.322 "bdev_virtio_attach_controller", 00:04:44.322 "bdev_virtio_scsi_get_devices", 00:04:44.322 "bdev_virtio_detach_controller", 00:04:44.322 "bdev_virtio_blk_set_hotplug", 00:04:44.322 "bdev_iscsi_delete", 00:04:44.322 "bdev_iscsi_create", 00:04:44.322 "bdev_iscsi_set_options", 00:04:44.322 "accel_error_inject_error", 00:04:44.322 "ioat_scan_accel_module", 00:04:44.322 "dsa_scan_accel_module", 00:04:44.322 "iaa_scan_accel_module", 00:04:44.322 "keyring_file_remove_key", 00:04:44.322 "keyring_file_add_key", 00:04:44.322 "keyring_linux_set_options", 00:04:44.322 "fsdev_aio_delete", 00:04:44.322 "fsdev_aio_create", 00:04:44.322 "iscsi_get_histogram", 00:04:44.322 "iscsi_enable_histogram", 00:04:44.322 "iscsi_set_options", 00:04:44.322 "iscsi_get_auth_groups", 00:04:44.322 "iscsi_auth_group_remove_secret", 00:04:44.322 "iscsi_auth_group_add_secret", 00:04:44.322 "iscsi_delete_auth_group", 00:04:44.322 "iscsi_create_auth_group", 00:04:44.322 "iscsi_set_discovery_auth", 00:04:44.322 "iscsi_get_options", 00:04:44.322 "iscsi_target_node_request_logout", 00:04:44.322 "iscsi_target_node_set_redirect", 00:04:44.322 "iscsi_target_node_set_auth", 00:04:44.322 "iscsi_target_node_add_lun", 00:04:44.322 "iscsi_get_stats", 00:04:44.322 "iscsi_get_connections", 00:04:44.322 "iscsi_portal_group_set_auth", 00:04:44.322 "iscsi_start_portal_group", 00:04:44.322 "iscsi_delete_portal_group", 00:04:44.322 "iscsi_create_portal_group", 00:04:44.322 "iscsi_get_portal_groups", 00:04:44.322 "iscsi_delete_target_node", 00:04:44.322 "iscsi_target_node_remove_pg_ig_maps", 00:04:44.322 "iscsi_target_node_add_pg_ig_maps", 00:04:44.322 "iscsi_create_target_node", 00:04:44.322 "iscsi_get_target_nodes", 00:04:44.322 "iscsi_delete_initiator_group", 00:04:44.322 "iscsi_initiator_group_remove_initiators", 00:04:44.322 "iscsi_initiator_group_add_initiators", 00:04:44.322 "iscsi_create_initiator_group", 00:04:44.322 "iscsi_get_initiator_groups", 00:04:44.322 "nvmf_set_crdt", 00:04:44.322 "nvmf_set_config", 00:04:44.322 "nvmf_set_max_subsystems", 00:04:44.322 "nvmf_stop_mdns_prr", 00:04:44.322 "nvmf_publish_mdns_prr", 00:04:44.322 "nvmf_subsystem_get_listeners", 00:04:44.322 "nvmf_subsystem_get_qpairs", 00:04:44.322 "nvmf_subsystem_get_controllers", 00:04:44.322 "nvmf_get_stats", 00:04:44.322 "nvmf_get_transports", 00:04:44.322 "nvmf_create_transport", 00:04:44.322 "nvmf_get_targets", 00:04:44.322 
"nvmf_delete_target", 00:04:44.322 "nvmf_create_target", 00:04:44.322 "nvmf_subsystem_allow_any_host", 00:04:44.322 "nvmf_subsystem_set_keys", 00:04:44.322 "nvmf_subsystem_remove_host", 00:04:44.322 "nvmf_subsystem_add_host", 00:04:44.322 "nvmf_ns_remove_host", 00:04:44.322 "nvmf_ns_add_host", 00:04:44.322 "nvmf_subsystem_remove_ns", 00:04:44.322 "nvmf_subsystem_set_ns_ana_group", 00:04:44.322 "nvmf_subsystem_add_ns", 00:04:44.322 "nvmf_subsystem_listener_set_ana_state", 00:04:44.322 "nvmf_discovery_get_referrals", 00:04:44.322 "nvmf_discovery_remove_referral", 00:04:44.322 "nvmf_discovery_add_referral", 00:04:44.322 "nvmf_subsystem_remove_listener", 00:04:44.322 "nvmf_subsystem_add_listener", 00:04:44.322 "nvmf_delete_subsystem", 00:04:44.322 "nvmf_create_subsystem", 00:04:44.322 "nvmf_get_subsystems", 00:04:44.322 "env_dpdk_get_mem_stats", 00:04:44.322 "nbd_get_disks", 00:04:44.322 "nbd_stop_disk", 00:04:44.322 "nbd_start_disk", 00:04:44.322 "ublk_recover_disk", 00:04:44.322 "ublk_get_disks", 00:04:44.322 "ublk_stop_disk", 00:04:44.322 "ublk_start_disk", 00:04:44.322 "ublk_destroy_target", 00:04:44.322 "ublk_create_target", 00:04:44.322 "virtio_blk_create_transport", 00:04:44.322 "virtio_blk_get_transports", 00:04:44.322 "vhost_controller_set_coalescing", 00:04:44.322 "vhost_get_controllers", 00:04:44.322 "vhost_delete_controller", 00:04:44.322 "vhost_create_blk_controller", 00:04:44.322 "vhost_scsi_controller_remove_target", 00:04:44.322 "vhost_scsi_controller_add_target", 00:04:44.322 "vhost_start_scsi_controller", 00:04:44.322 "vhost_create_scsi_controller", 00:04:44.322 "thread_set_cpumask", 00:04:44.322 "scheduler_set_options", 00:04:44.322 "framework_get_governor", 00:04:44.322 "framework_get_scheduler", 00:04:44.322 "framework_set_scheduler", 00:04:44.322 "framework_get_reactors", 00:04:44.322 "thread_get_io_channels", 00:04:44.322 "thread_get_pollers", 00:04:44.322 "thread_get_stats", 00:04:44.322 "framework_monitor_context_switch", 00:04:44.322 "spdk_kill_instance", 00:04:44.322 "log_enable_timestamps", 00:04:44.322 "log_get_flags", 00:04:44.322 "log_clear_flag", 00:04:44.322 "log_set_flag", 00:04:44.322 "log_get_level", 00:04:44.322 "log_set_level", 00:04:44.322 "log_get_print_level", 00:04:44.322 "log_set_print_level", 00:04:44.322 "framework_enable_cpumask_locks", 00:04:44.322 "framework_disable_cpumask_locks", 00:04:44.322 "framework_wait_init", 00:04:44.322 "framework_start_init", 00:04:44.322 "scsi_get_devices", 00:04:44.322 "bdev_get_histogram", 00:04:44.322 "bdev_enable_histogram", 00:04:44.322 "bdev_set_qos_limit", 00:04:44.322 "bdev_set_qd_sampling_period", 00:04:44.322 "bdev_get_bdevs", 00:04:44.322 "bdev_reset_iostat", 00:04:44.322 "bdev_get_iostat", 00:04:44.322 "bdev_examine", 00:04:44.322 "bdev_wait_for_examine", 00:04:44.322 "bdev_set_options", 00:04:44.322 "accel_get_stats", 00:04:44.322 "accel_set_options", 00:04:44.322 "accel_set_driver", 00:04:44.322 "accel_crypto_key_destroy", 00:04:44.322 "accel_crypto_keys_get", 00:04:44.322 "accel_crypto_key_create", 00:04:44.322 "accel_assign_opc", 00:04:44.322 "accel_get_module_info", 00:04:44.322 "accel_get_opc_assignments", 00:04:44.322 "vmd_rescan", 00:04:44.322 "vmd_remove_device", 00:04:44.322 "vmd_enable", 00:04:44.322 "sock_get_default_impl", 00:04:44.322 "sock_set_default_impl", 00:04:44.322 "sock_impl_set_options", 00:04:44.322 "sock_impl_get_options", 00:04:44.322 "iobuf_get_stats", 00:04:44.322 "iobuf_set_options", 00:04:44.322 "keyring_get_keys", 00:04:44.322 "framework_get_pci_devices", 00:04:44.322 
"framework_get_config", 00:04:44.322 "framework_get_subsystems", 00:04:44.322 "fsdev_set_opts", 00:04:44.322 "fsdev_get_opts", 00:04:44.322 "trace_get_info", 00:04:44.322 "trace_get_tpoint_group_mask", 00:04:44.322 "trace_disable_tpoint_group", 00:04:44.322 "trace_enable_tpoint_group", 00:04:44.322 "trace_clear_tpoint_mask", 00:04:44.322 "trace_set_tpoint_mask", 00:04:44.322 "notify_get_notifications", 00:04:44.322 "notify_get_types", 00:04:44.322 "spdk_get_version", 00:04:44.322 "rpc_get_methods" 00:04:44.322 ] 00:04:44.322 09:37:23 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:44.322 09:37:23 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:44.322 09:37:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.322 09:37:23 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:44.322 09:37:23 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57981 00:04:44.322 09:37:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57981 ']' 00:04:44.322 09:37:23 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57981 00:04:44.322 09:37:23 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:44.322 09:37:23 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.322 09:37:23 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57981 00:04:44.322 09:37:23 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.322 09:37:23 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.322 killing process with pid 57981 00:04:44.322 09:37:23 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57981' 00:04:44.322 09:37:23 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57981 00:04:44.322 09:37:23 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57981 00:04:45.700 00:04:45.700 real 0m2.451s 00:04:45.700 user 0m4.445s 00:04:45.700 sys 0m0.393s 00:04:45.700 ************************************ 00:04:45.700 END TEST spdkcli_tcp 00:04:45.700 ************************************ 00:04:45.700 09:37:24 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.700 09:37:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.700 09:37:24 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:45.700 09:37:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.700 09:37:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.700 09:37:24 -- common/autotest_common.sh@10 -- # set +x 00:04:45.700 ************************************ 00:04:45.700 START TEST dpdk_mem_utility 00:04:45.700 ************************************ 00:04:45.700 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:45.700 * Looking for test storage... 
00:04:45.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:45.700 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.700 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.700 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.700 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.700 09:37:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:45.701 09:37:24 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.701 09:37:24 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.701 09:37:24 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.701 09:37:24 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:45.701 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.701 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.701 --rc genhtml_branch_coverage=1 00:04:45.701 --rc genhtml_function_coverage=1 00:04:45.701 --rc genhtml_legend=1 00:04:45.701 --rc geninfo_all_blocks=1 00:04:45.701 --rc geninfo_unexecuted_blocks=1 00:04:45.701 00:04:45.701 ' 00:04:45.701 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.701 --rc 
genhtml_branch_coverage=1 00:04:45.701 --rc genhtml_function_coverage=1 00:04:45.701 --rc genhtml_legend=1 00:04:45.701 --rc geninfo_all_blocks=1 00:04:45.701 --rc geninfo_unexecuted_blocks=1 00:04:45.701 00:04:45.701 ' 00:04:45.701 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.701 --rc genhtml_branch_coverage=1 00:04:45.701 --rc genhtml_function_coverage=1 00:04:45.701 --rc genhtml_legend=1 00:04:45.701 --rc geninfo_all_blocks=1 00:04:45.701 --rc geninfo_unexecuted_blocks=1 00:04:45.701 00:04:45.701 ' 00:04:45.701 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.701 --rc genhtml_branch_coverage=1 00:04:45.701 --rc genhtml_function_coverage=1 00:04:45.701 --rc genhtml_legend=1 00:04:45.701 --rc geninfo_all_blocks=1 00:04:45.701 --rc geninfo_unexecuted_blocks=1 00:04:45.701 00:04:45.701 ' 00:04:45.701 09:37:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:45.701 09:37:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58092 00:04:45.701 09:37:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58092 00:04:45.701 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58092 ']' 00:04:45.701 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.701 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.701 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.701 09:37:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.701 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.701 09:37:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:45.701 [2024-11-28 09:37:24.483255] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:45.701 [2024-11-28 09:37:24.483347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58092 ] 00:04:45.960 [2024-11-28 09:37:24.631409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.960 [2024-11-28 09:37:24.706299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.525 09:37:25 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.525 09:37:25 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:46.525 09:37:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:46.525 09:37:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:46.525 09:37:25 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.525 09:37:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:46.525 { 00:04:46.526 "filename": "/tmp/spdk_mem_dump.txt" 00:04:46.526 } 00:04:46.526 09:37:25 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.526 09:37:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:46.526 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:46.526 1 heaps totaling size 824.000000 MiB 00:04:46.526 size: 824.000000 MiB heap id: 0 00:04:46.526 end heaps---------- 00:04:46.526 9 mempools totaling size 603.782043 MiB 00:04:46.526 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:46.526 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:46.526 size: 100.555481 MiB name: bdev_io_58092 00:04:46.526 size: 50.003479 MiB name: msgpool_58092 00:04:46.526 size: 36.509338 MiB name: fsdev_io_58092 00:04:46.526 size: 21.763794 MiB name: PDU_Pool 00:04:46.526 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:46.526 size: 4.133484 MiB name: evtpool_58092 00:04:46.526 size: 0.026123 MiB name: Session_Pool 00:04:46.526 end mempools------- 00:04:46.526 6 memzones totaling size 4.142822 MiB 00:04:46.526 size: 1.000366 MiB name: RG_ring_0_58092 00:04:46.526 size: 1.000366 MiB name: RG_ring_1_58092 00:04:46.526 size: 1.000366 MiB name: RG_ring_4_58092 00:04:46.526 size: 1.000366 MiB name: RG_ring_5_58092 00:04:46.526 size: 0.125366 MiB name: RG_ring_2_58092 00:04:46.526 size: 0.015991 MiB name: RG_ring_3_58092 00:04:46.526 end memzones------- 00:04:46.526 09:37:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:46.785 heap id: 0 total size: 824.000000 MiB number of busy elements: 323 number of free elements: 18 00:04:46.785 list of free elements. 
size: 16.779419 MiB 00:04:46.785 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:46.785 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:46.785 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:46.785 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:46.785 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:46.785 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:46.785 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:46.785 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:46.785 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:46.785 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:46.785 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:46.785 element at address: 0x20001b400000 with size: 0.559509 MiB 00:04:46.785 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:46.785 element at address: 0x200019600000 with size: 0.488220 MiB 00:04:46.785 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:46.785 element at address: 0x200012c00000 with size: 0.433472 MiB 00:04:46.785 element at address: 0x200028800000 with size: 0.390442 MiB 00:04:46.785 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:46.785 list of standard malloc elements. size: 199.289673 MiB 00:04:46.785 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:46.785 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:46.785 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:46.785 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:46.785 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:46.785 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:46.785 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:46.785 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:46.785 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:46.785 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:46.785 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:46.785 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:46.785 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:46.785 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:46.785 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:46.785 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001967d0c0 
with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b48f3c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b48f4c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4915c0 with size: 0.000244 MiB 
00:04:46.786 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:46.786 element at 
address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:46.786 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:46.787 element at address: 0x200028863f40 with size: 0.000244 MiB 00:04:46.787 element at address: 0x200028864040 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886af80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886b080 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886b180 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886b280 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886b380 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886b780 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886d080 
with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886d180 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886d380 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886d880 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886f480 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:46.787 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:04:46.787 list of memzone associated elements. 
size: 607.930908 MiB 00:04:46.787 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:46.787 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:46.787 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:46.787 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:46.787 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:46.787 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58092_0 00:04:46.787 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:46.787 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58092_0 00:04:46.787 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:46.787 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58092_0 00:04:46.787 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:46.787 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:46.787 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:46.787 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:46.787 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:46.787 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58092_0 00:04:46.787 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:46.787 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58092 00:04:46.787 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:46.787 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58092 00:04:46.787 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:46.787 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:46.787 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:46.787 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:46.787 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:46.787 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:46.787 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:46.787 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:46.787 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:46.787 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58092 00:04:46.787 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:46.787 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58092 00:04:46.787 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:46.787 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58092 00:04:46.787 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:46.787 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58092 00:04:46.787 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:46.787 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58092 00:04:46.787 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:46.787 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58092 00:04:46.787 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:46.787 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:46.787 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:46.787 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:46.787 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:04:46.787 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:46.787 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:46.787 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58092 00:04:46.787 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:46.787 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58092 00:04:46.787 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:46.787 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:46.787 element at address: 0x200028864140 with size: 0.023804 MiB 00:04:46.787 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:46.787 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:46.787 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58092 00:04:46.787 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:04:46.787 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:46.787 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:46.787 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58092 00:04:46.787 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:46.787 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58092 00:04:46.787 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:46.787 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58092 00:04:46.788 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:04:46.788 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:46.788 09:37:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:46.788 09:37:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58092 00:04:46.788 09:37:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58092 ']' 00:04:46.788 09:37:25 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58092 00:04:46.788 09:37:25 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:46.788 09:37:25 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.788 09:37:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58092 00:04:46.788 09:37:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.788 09:37:25 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.788 killing process with pid 58092 00:04:46.788 09:37:25 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58092' 00:04:46.788 09:37:25 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58092 00:04:46.788 09:37:25 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58092 00:04:47.733 00:04:47.733 real 0m2.323s 00:04:47.733 user 0m2.355s 00:04:47.733 sys 0m0.364s 00:04:47.733 09:37:26 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.733 ************************************ 00:04:47.733 END TEST dpdk_mem_utility 00:04:47.733 ************************************ 00:04:47.733 09:37:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:47.994 09:37:26 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:47.994 09:37:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.994 09:37:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.994 09:37:26 -- common/autotest_common.sh@10 -- # set +x 
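The element and memzone listing above is the DPDK heap dump the dpdk_mem_utility test collects before tearing the target down, and the teardown itself follows the usual autotest killprocess pattern traced for pid 58092: confirm the pid is still alive, refuse to touch sudo, then kill and wait. A paraphrased bash sketch of that pattern (reconstructed from the trace above, not copied from autotest_common.sh):

    killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to clean up
      if [ "$(uname)" = Linux ]; then
        # never kill sudo itself; the trace checks the command name first
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true
    }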
00:04:47.994 ************************************ 00:04:47.994 START TEST event 00:04:47.994 ************************************ 00:04:47.994 09:37:26 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:47.994 * Looking for test storage... 00:04:47.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:47.994 09:37:26 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.994 09:37:26 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.994 09:37:26 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.994 09:37:26 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.994 09:37:26 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.994 09:37:26 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.994 09:37:26 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.994 09:37:26 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.994 09:37:26 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.994 09:37:26 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.994 09:37:26 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.994 09:37:26 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.994 09:37:26 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.994 09:37:26 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.994 09:37:26 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.994 09:37:26 event -- scripts/common.sh@344 -- # case "$op" in 00:04:47.994 09:37:26 event -- scripts/common.sh@345 -- # : 1 00:04:47.994 09:37:26 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.994 09:37:26 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.994 09:37:26 event -- scripts/common.sh@365 -- # decimal 1 00:04:47.994 09:37:26 event -- scripts/common.sh@353 -- # local d=1 00:04:47.994 09:37:26 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.994 09:37:26 event -- scripts/common.sh@355 -- # echo 1 00:04:47.994 09:37:26 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.995 09:37:26 event -- scripts/common.sh@366 -- # decimal 2 00:04:47.995 09:37:26 event -- scripts/common.sh@353 -- # local d=2 00:04:47.995 09:37:26 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.995 09:37:26 event -- scripts/common.sh@355 -- # echo 2 00:04:47.995 09:37:26 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.995 09:37:26 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.995 09:37:26 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.995 09:37:26 event -- scripts/common.sh@368 -- # return 0 00:04:47.995 09:37:26 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.995 09:37:26 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.995 --rc genhtml_branch_coverage=1 00:04:47.995 --rc genhtml_function_coverage=1 00:04:47.995 --rc genhtml_legend=1 00:04:47.995 --rc geninfo_all_blocks=1 00:04:47.995 --rc geninfo_unexecuted_blocks=1 00:04:47.995 00:04:47.995 ' 00:04:47.995 09:37:26 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.995 --rc genhtml_branch_coverage=1 00:04:47.995 --rc genhtml_function_coverage=1 00:04:47.995 --rc genhtml_legend=1 00:04:47.995 --rc 
geninfo_all_blocks=1 00:04:47.995 --rc geninfo_unexecuted_blocks=1 00:04:47.995 00:04:47.995 ' 00:04:47.995 09:37:26 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.995 --rc genhtml_branch_coverage=1 00:04:47.995 --rc genhtml_function_coverage=1 00:04:47.995 --rc genhtml_legend=1 00:04:47.995 --rc geninfo_all_blocks=1 00:04:47.995 --rc geninfo_unexecuted_blocks=1 00:04:47.995 00:04:47.995 ' 00:04:47.995 09:37:26 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.995 --rc genhtml_branch_coverage=1 00:04:47.995 --rc genhtml_function_coverage=1 00:04:47.995 --rc genhtml_legend=1 00:04:47.995 --rc geninfo_all_blocks=1 00:04:47.995 --rc geninfo_unexecuted_blocks=1 00:04:47.995 00:04:47.995 ' 00:04:47.995 09:37:26 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:47.995 09:37:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:47.995 09:37:26 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:47.995 09:37:26 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:47.995 09:37:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.995 09:37:26 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.995 ************************************ 00:04:47.995 START TEST event_perf 00:04:47.995 ************************************ 00:04:47.995 09:37:26 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:47.995 Running I/O for 1 seconds...[2024-11-28 09:37:26.824032] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:47.995 [2024-11-28 09:37:26.824140] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58178 ] 00:04:48.254 [2024-11-28 09:37:26.979369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.254 [2024-11-28 09:37:27.056754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.254 Running I/O for 1 seconds...[2024-11-28 09:37:27.057373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.254 [2024-11-28 09:37:27.057681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.254 [2024-11-28 09:37:27.057706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.630 00:04:49.630 lcore 0: 216855 00:04:49.630 lcore 1: 216855 00:04:49.630 lcore 2: 216858 00:04:49.630 lcore 3: 216858 00:04:49.630 done. 
00:04:49.630 00:04:49.630 real 0m1.392s 00:04:49.630 user 0m4.194s 00:04:49.630 sys 0m0.081s 00:04:49.630 ************************************ 00:04:49.630 END TEST event_perf 00:04:49.630 ************************************ 00:04:49.630 09:37:28 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.630 09:37:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.630 09:37:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:49.630 09:37:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:49.630 09:37:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.630 09:37:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.630 ************************************ 00:04:49.630 START TEST event_reactor 00:04:49.630 ************************************ 00:04:49.630 09:37:28 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:49.630 [2024-11-28 09:37:28.258812] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:49.630 [2024-11-28 09:37:28.258891] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58217 ] 00:04:49.630 [2024-11-28 09:37:28.406739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.631 [2024-11-28 09:37:28.481032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.005 test_start 00:04:51.005 oneshot 00:04:51.005 tick 100 00:04:51.005 tick 100 00:04:51.005 tick 250 00:04:51.005 tick 100 00:04:51.005 tick 100 00:04:51.005 tick 100 00:04:51.005 tick 250 00:04:51.005 tick 500 00:04:51.005 tick 100 00:04:51.005 tick 100 00:04:51.005 tick 250 00:04:51.005 tick 100 00:04:51.005 tick 100 00:04:51.005 test_end 00:04:51.005 00:04:51.005 real 0m1.369s 00:04:51.005 user 0m1.208s 00:04:51.005 sys 0m0.054s 00:04:51.005 09:37:29 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.005 09:37:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:51.005 ************************************ 00:04:51.005 END TEST event_reactor 00:04:51.005 ************************************ 00:04:51.005 09:37:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.005 09:37:29 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:51.005 09:37:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.005 09:37:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.005 ************************************ 00:04:51.005 START TEST event_reactor_perf 00:04:51.005 ************************************ 00:04:51.005 09:37:29 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.005 [2024-11-28 09:37:29.679389] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:51.005 [2024-11-28 09:37:29.679822] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58254 ] 00:04:51.005 [2024-11-28 09:37:29.835011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.263 [2024-11-28 09:37:29.912278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.199 test_start 00:04:52.199 test_end 00:04:52.199 Performance: 421130 events per second 00:04:52.199 00:04:52.199 real 0m1.382s 00:04:52.199 user 0m1.206s 00:04:52.199 sys 0m0.068s 00:04:52.199 09:37:31 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.199 09:37:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.199 ************************************ 00:04:52.199 END TEST event_reactor_perf 00:04:52.199 ************************************ 00:04:52.458 09:37:31 event -- event/event.sh@49 -- # uname -s 00:04:52.458 09:37:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:52.458 09:37:31 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.458 09:37:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.458 09:37:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.458 09:37:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.458 ************************************ 00:04:52.458 START TEST event_scheduler 00:04:52.458 ************************************ 00:04:52.458 09:37:31 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.458 * Looking for test storage... 
00:04:52.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:52.458 09:37:31 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.458 09:37:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.458 09:37:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.458 09:37:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.458 09:37:31 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:52.458 09:37:31 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.458 09:37:31 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.458 --rc genhtml_branch_coverage=1 00:04:52.458 --rc genhtml_function_coverage=1 00:04:52.458 --rc genhtml_legend=1 00:04:52.458 --rc geninfo_all_blocks=1 00:04:52.458 --rc geninfo_unexecuted_blocks=1 00:04:52.458 00:04:52.458 ' 00:04:52.458 09:37:31 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.458 --rc genhtml_branch_coverage=1 00:04:52.458 --rc genhtml_function_coverage=1 00:04:52.459 --rc genhtml_legend=1 00:04:52.459 --rc geninfo_all_blocks=1 00:04:52.459 --rc geninfo_unexecuted_blocks=1 00:04:52.459 00:04:52.459 ' 00:04:52.459 09:37:31 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.459 --rc genhtml_branch_coverage=1 00:04:52.459 --rc genhtml_function_coverage=1 00:04:52.459 --rc genhtml_legend=1 00:04:52.459 --rc geninfo_all_blocks=1 00:04:52.459 --rc geninfo_unexecuted_blocks=1 00:04:52.459 00:04:52.459 ' 00:04:52.459 09:37:31 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.459 --rc genhtml_branch_coverage=1 00:04:52.459 --rc genhtml_function_coverage=1 00:04:52.459 --rc genhtml_legend=1 00:04:52.459 --rc geninfo_all_blocks=1 00:04:52.459 --rc geninfo_unexecuted_blocks=1 00:04:52.459 00:04:52.459 ' 00:04:52.459 09:37:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:52.459 09:37:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58319 00:04:52.459 09:37:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
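The `lt 1.15 2` trace above (scripts/common.sh) is a dotted-version comparison used to decide which lcov options to export: both version strings are split on their separators and compared field by field. A condensed sketch of that logic (a simplified reconstruction, not the exact upstream helpers):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
      local IFS='.-:' op=$2
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
          [[ $op == '>' || $op == '>=' ]]; return     # left side is newer
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
          [[ $op == '<' || $op == '<=' ]]; return     # left side is older
        fi
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]  # all fields equal
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x"          # the check seen in the trace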
00:04:52.459 09:37:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58319 00:04:52.459 09:37:31 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58319 ']' 00:04:52.459 09:37:31 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.459 09:37:31 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.459 09:37:31 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.459 09:37:31 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.459 09:37:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.459 09:37:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:52.459 [2024-11-28 09:37:31.301878] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:52.459 [2024-11-28 09:37:31.301995] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58319 ] 00:04:52.717 [2024-11-28 09:37:31.463904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:52.717 [2024-11-28 09:37:31.563779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.717 [2024-11-28 09:37:31.564416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.717 [2024-11-28 09:37:31.564677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:52.717 [2024-11-28 09:37:31.564754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.287 09:37:32 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.287 09:37:32 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:53.287 09:37:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:53.287 09:37:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.287 09:37:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.287 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.287 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.287 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.287 POWER: Cannot set governor of lcore 0 to performance 00:04:53.287 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.287 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.287 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.287 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.287 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:53.287 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:53.287 POWER: Unable to set Power Management Environment for lcore 0 00:04:53.287 [2024-11-28 09:37:32.102252] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:53.287 [2024-11-28 09:37:32.102269] dpdk_governor.c: 196:_init: *ERROR*: Failed to 
initialize on core0 00:04:53.287 [2024-11-28 09:37:32.102278] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:53.287 [2024-11-28 09:37:32.102295] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:53.287 [2024-11-28 09:37:32.102303] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:53.287 [2024-11-28 09:37:32.102312] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:53.287 09:37:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.287 09:37:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:53.288 09:37:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.288 09:37:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.568 [2024-11-28 09:37:32.344240] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:53.568 09:37:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.568 09:37:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:53.568 09:37:32 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.568 09:37:32 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.568 09:37:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.568 ************************************ 00:04:53.568 START TEST scheduler_create_thread 00:04:53.568 ************************************ 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.568 2 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.568 3 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.568 4 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.568 5 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.568 6 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.568 7 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.568 8 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.568 9 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.568 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.878 10 00:04:53.878 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.878 09:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:53.878 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:53.878 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.878 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.878 09:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:53.878 09:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:53.878 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.878 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.878 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.878 09:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:53.878 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.878 09:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.253 09:37:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.253 09:37:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:55.253 09:37:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:55.253 09:37:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.253 09:37:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.186 09:37:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.186 00:04:56.186 real 0m2.614s 00:04:56.186 user 0m0.013s 00:04:56.186 sys 0m0.009s 00:04:56.186 09:37:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.186 09:37:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.186 ************************************ 00:04:56.186 END TEST scheduler_create_thread 00:04:56.186 ************************************ 00:04:56.186 09:37:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:56.186 09:37:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58319 00:04:56.186 09:37:35 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58319 ']' 00:04:56.186 09:37:35 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58319 00:04:56.186 09:37:35 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:56.186 09:37:35 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.186 09:37:35 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58319 00:04:56.186 09:37:35 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:56.186 09:37:35 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:56.186 killing process with pid 58319 00:04:56.186 09:37:35 event.event_scheduler -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 58319' 00:04:56.186 09:37:35 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58319 00:04:56.186 09:37:35 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58319 00:04:56.750 [2024-11-28 09:37:35.450080] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:57.317 00:04:57.317 real 0m4.933s 00:04:57.317 user 0m8.510s 00:04:57.317 sys 0m0.357s 00:04:57.317 ************************************ 00:04:57.317 END TEST event_scheduler 00:04:57.317 ************************************ 00:04:57.317 09:37:36 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.317 09:37:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.317 09:37:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:57.317 09:37:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:57.317 09:37:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.317 09:37:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.317 09:37:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.317 ************************************ 00:04:57.317 START TEST app_repeat 00:04:57.317 ************************************ 00:04:57.317 09:37:36 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:57.317 09:37:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.317 09:37:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.317 09:37:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:57.317 09:37:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.317 09:37:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:57.317 09:37:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:57.317 09:37:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:57.317 09:37:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58425 00:04:57.317 09:37:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.317 Process app_repeat pid: 58425 00:04:57.317 09:37:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58425' 00:04:57.317 09:37:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.317 spdk_app_start Round 0 00:04:57.317 09:37:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:57.317 09:37:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58425 /var/tmp/spdk-nbd.sock 00:04:57.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:57.317 09:37:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58425 ']' 00:04:57.317 09:37:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.317 09:37:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.317 09:37:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
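After app_repeat is started with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4, the harness blocks in waitforlisten 58425 until the RPC socket answers. A minimal sketch of that wait loop (retry count and the probe RPC are assumptions, not the exact autotest_common.sh code):

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1        # target app exited early
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
          return 0                                    # socket is up and the app answers RPCs
        fi
        sleep 0.1
      done
      echo "timed out waiting for $rpc_addr" >&2
      return 1
    }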
00:04:57.317 09:37:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:57.317 09:37:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.317 09:37:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.317 [2024-11-28 09:37:36.130007] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:57.317 [2024-11-28 09:37:36.130115] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58425 ] 00:04:57.575 [2024-11-28 09:37:36.286681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.575 [2024-11-28 09:37:36.362556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.575 [2024-11-28 09:37:36.362632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.141 09:37:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.141 09:37:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:58.141 09:37:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.399 Malloc0 00:04:58.399 09:37:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.657 Malloc1 00:04:58.657 09:37:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.657 09:37:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.657 09:37:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.657 09:37:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.657 09:37:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.657 09:37:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.657 09:37:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.657 09:37:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.657 09:37:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.657 09:37:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.657 09:37:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.657 09:37:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:58.657 09:37:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:58.657 09:37:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.657 09:37:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.657 09:37:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.916 /dev/nbd0 00:04:58.916 09:37:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.916 09:37:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.916 09:37:37 event.app_repeat -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:58.916 09:37:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:58.916 09:37:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:58.916 09:37:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:58.916 09:37:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:58.916 09:37:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:58.916 09:37:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:58.916 09:37:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:58.916 09:37:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.916 1+0 records in 00:04:58.916 1+0 records out 00:04:58.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000156014 s, 26.3 MB/s 00:04:58.916 09:37:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.916 09:37:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:58.916 09:37:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.916 09:37:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:58.916 09:37:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:58.916 09:37:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.916 09:37:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.916 09:37:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.175 /dev/nbd1 00:04:59.175 09:37:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.175 09:37:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.175 09:37:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:59.175 09:37:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:59.175 09:37:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:59.175 09:37:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:59.175 09:37:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:59.175 09:37:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:59.175 09:37:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:59.175 09:37:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:59.175 09:37:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.175 1+0 records in 00:04:59.175 1+0 records out 00:04:59.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222795 s, 18.4 MB/s 00:04:59.175 09:37:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.175 09:37:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:59.175 09:37:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.175 09:37:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
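Both waitfornbd traces in this round (nbd0 above, nbd1 completing just below) follow the same two-step check: wait for the kernel to publish the device in /proc/partitions, then prove a single direct-I/O block can be read from it. Condensed into one function — a sketch, not the real helper; the temp-file path and sleep interval are illustrative, the retry count of 20 and the dd/stat check mirror the trace:

    # Wait until /dev/<nbd_name> exists and serves a 4 KiB direct read (sketch of waitfornbd)
    waitfornbd_sketch() {
        local nbd_name=$1 i tmp=/tmp/nbdtest     # illustrative path; the test writes under test/event/
        for ((i = 1; i <= 20; i++)); do          # step 1: device appears in /proc/partitions
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do          # step 2: one direct-I/O block reads back non-empty
            if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2> /dev/null &&
               [[ $(stat -c %s "$tmp") != 0 ]]; then
                rm -f "$tmp"
                return 0
            fi
            sleep 0.1
        done
        rm -f "$tmp"
        return 1
    }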
00:04:59.175 09:37:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:59.175 09:37:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.175 09:37:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.175 09:37:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.175 09:37:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.175 09:37:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.434 { 00:04:59.434 "nbd_device": "/dev/nbd0", 00:04:59.434 "bdev_name": "Malloc0" 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "nbd_device": "/dev/nbd1", 00:04:59.434 "bdev_name": "Malloc1" 00:04:59.434 } 00:04:59.434 ]' 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.434 { 00:04:59.434 "nbd_device": "/dev/nbd0", 00:04:59.434 "bdev_name": "Malloc0" 00:04:59.434 }, 00:04:59.434 { 00:04:59.434 "nbd_device": "/dev/nbd1", 00:04:59.434 "bdev_name": "Malloc1" 00:04:59.434 } 00:04:59.434 ]' 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.434 /dev/nbd1' 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.434 /dev/nbd1' 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.434 256+0 records in 00:04:59.434 256+0 records out 00:04:59.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111804 s, 93.8 MB/s 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.434 256+0 records in 00:04:59.434 256+0 records out 00:04:59.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162913 s, 64.4 MB/s 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 
00:04:59.434 256+0 records in 00:04:59.434 256+0 records out 00:04:59.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157721 s, 66.5 MB/s 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.434 09:37:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.693 09:37:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.693 09:37:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.693 09:37:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.693 09:37:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.694 09:37:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.694 09:37:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.694 09:37:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.694 09:37:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.694 09:37:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.694 09:37:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:59.952 09:37:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:59.952 09:37:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:59.952 09:37:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:59.952 09:37:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.952 09:37:38 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.952 09:37:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:59.952 09:37:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.952 09:37:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.952 09:37:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.952 09:37:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.952 09:37:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.210 09:37:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.210 09:37:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.210 09:37:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.210 09:37:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.210 09:37:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.210 09:37:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.210 09:37:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:00.210 09:37:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.210 09:37:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.210 09:37:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.210 09:37:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.210 09:37:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.210 09:37:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:00.468 09:37:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:01.034 [2024-11-28 09:37:39.721214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.034 [2024-11-28 09:37:39.789189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.034 [2024-11-28 09:37:39.789207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.034 [2024-11-28 09:37:39.885679] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.034 [2024-11-28 09:37:39.885732] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:03.564 09:37:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.564 spdk_app_start Round 1 00:05:03.564 09:37:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:03.564 09:37:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58425 /var/tmp/spdk-nbd.sock 00:05:03.564 09:37:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58425 ']' 00:05:03.564 09:37:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.564 09:37:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:03.564 09:37:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
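Round 0 above has just run the full create→export→verify→teardown cycle that Round 1 (continuing below) repeats. Stripped of the xtrace plumbing, one cycle is roughly the following; the rpc.py subcommands, sizes, and dd/cmp parameters are taken verbatim from the trace, while the temp-file path and the lack of error handling are simplifications:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=/tmp/nbdrandtest                         # the test keeps this under test/event/; /tmp is illustrative

    # Create two malloc bdevs (64 MiB, 4 KiB blocks) and export them over NBD;
    # the target names them Malloc0 and Malloc1 in the trace.
    $rpc bdev_malloc_create 64 4096
    $rpc bdev_malloc_create 64 4096
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # Push 1 MiB of random data through each NBD device, then read it back and compare.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$nbd"
    done
    rm "$tmp"

    # Detach the devices and shut the app down so the next round starts clean.
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM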
00:05:03.564 09:37:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.564 09:37:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.564 09:37:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.564 09:37:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:03.564 09:37:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.822 Malloc0 00:05:03.822 09:37:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.080 Malloc1 00:05:04.080 09:37:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.080 09:37:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.080 09:37:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.081 09:37:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.081 09:37:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.081 09:37:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.081 09:37:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.081 09:37:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.081 09:37:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.081 09:37:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.081 09:37:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.081 09:37:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.081 09:37:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.081 09:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.081 09:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.081 09:37:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.339 /dev/nbd0 00:05:04.339 09:37:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.339 09:37:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.339 09:37:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:04.339 09:37:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:04.339 09:37:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:04.339 09:37:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:04.339 09:37:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:04.339 09:37:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:04.339 09:37:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:04.339 09:37:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:04.339 09:37:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.339 1+0 records in 00:05:04.339 1+0 records out 
00:05:04.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026669 s, 15.4 MB/s 00:05:04.339 09:37:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.339 09:37:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:04.339 09:37:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.339 09:37:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:04.339 09:37:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:04.339 09:37:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.339 09:37:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.339 09:37:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.597 /dev/nbd1 00:05:04.597 09:37:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:04.597 09:37:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:04.597 09:37:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:04.597 09:37:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:04.597 09:37:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:04.597 09:37:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:04.597 09:37:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:04.597 09:37:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:04.597 09:37:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:04.597 09:37:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:04.597 09:37:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.597 1+0 records in 00:05:04.597 1+0 records out 00:05:04.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262841 s, 15.6 MB/s 00:05:04.597 09:37:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.597 09:37:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:04.597 09:37:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.597 09:37:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:04.597 09:37:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:04.597 09:37:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.598 09:37:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.598 09:37:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.598 09:37:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.598 09:37:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:04.856 { 00:05:04.856 "nbd_device": "/dev/nbd0", 00:05:04.856 "bdev_name": "Malloc0" 00:05:04.856 }, 00:05:04.856 { 00:05:04.856 "nbd_device": "/dev/nbd1", 00:05:04.856 "bdev_name": "Malloc1" 00:05:04.856 } 
00:05:04.856 ]' 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.856 { 00:05:04.856 "nbd_device": "/dev/nbd0", 00:05:04.856 "bdev_name": "Malloc0" 00:05:04.856 }, 00:05:04.856 { 00:05:04.856 "nbd_device": "/dev/nbd1", 00:05:04.856 "bdev_name": "Malloc1" 00:05:04.856 } 00:05:04.856 ]' 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.856 /dev/nbd1' 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.856 /dev/nbd1' 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.856 256+0 records in 00:05:04.856 256+0 records out 00:05:04.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00588881 s, 178 MB/s 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.856 256+0 records in 00:05:04.856 256+0 records out 00:05:04.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136075 s, 77.1 MB/s 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.856 256+0 records in 00:05:04.856 256+0 records out 00:05:04.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161867 s, 64.8 MB/s 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.856 09:37:43 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.856 09:37:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.115 09:37:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.115 09:37:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.115 09:37:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.115 09:37:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.115 09:37:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.115 09:37:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.115 09:37:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.115 09:37:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.115 09:37:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.115 09:37:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.374 09:37:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:05.632 09:37:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.632 09:37:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.632 09:37:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.632 09:37:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.632 09:37:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.632 09:37:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:05.890 09:37:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.455 [2024-11-28 09:37:45.094184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.455 [2024-11-28 09:37:45.162148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.455 [2024-11-28 09:37:45.162188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.455 [2024-11-28 09:37:45.263230] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.455 [2024-11-28 09:37:45.263276] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:08.986 09:37:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:08.986 spdk_app_start Round 2 00:05:08.986 09:37:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:08.986 09:37:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58425 /var/tmp/spdk-nbd.sock 00:05:08.986 09:37:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58425 ']' 00:05:08.986 09:37:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.986 09:37:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:08.986 09:37:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
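The nbd_get_disks / jq / grep -c sequence seen in each round is how the test counts exported devices: it expects 2 while the malloc bdevs are attached and 0 after nbd_stop_disk. In isolation the check looks like this (a sketch; the expected count of 2 corresponds to the attached stage in the trace):

    # Count how many NBD devices the target currently exports
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    disks_json=$($rpc nbd_get_disks)                        # e.g. [{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"}, ...]
    disk_names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$disk_names" | grep -c /dev/nbd || true)  # grep -c exits 1 on an empty list, hence the || true
    [[ $count -eq 2 ]] || { echo "expected 2 NBD devices, found $count" >&2; exit 1; }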
00:05:08.986 09:37:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.986 09:37:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.986 09:37:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.986 09:37:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:08.986 09:37:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.245 Malloc0 00:05:09.245 09:37:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.503 Malloc1 00:05:09.503 09:37:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.503 09:37:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.503 09:37:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.503 09:37:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.503 09:37:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.503 09:37:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.503 09:37:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.503 09:37:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.503 09:37:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.503 09:37:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.503 09:37:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.503 09:37:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.503 09:37:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:09.503 09:37:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.503 09:37:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.503 09:37:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.762 /dev/nbd0 00:05:09.762 09:37:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.762 09:37:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.762 09:37:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:09.762 09:37:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:09.762 09:37:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:09.762 09:37:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:09.762 09:37:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:09.762 09:37:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:09.762 09:37:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:09.762 09:37:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:09.762 09:37:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.762 1+0 records in 00:05:09.762 1+0 records out 
00:05:09.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188944 s, 21.7 MB/s 00:05:09.762 09:37:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.762 09:37:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:09.762 09:37:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.762 09:37:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:09.762 09:37:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:09.762 09:37:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.762 09:37:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.762 09:37:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.762 /dev/nbd1 00:05:10.021 09:37:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.021 09:37:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.021 09:37:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:10.021 09:37:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:10.021 09:37:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:10.021 09:37:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:10.021 09:37:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:10.021 09:37:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:10.021 09:37:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:10.021 09:37:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:10.021 09:37:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.021 1+0 records in 00:05:10.021 1+0 records out 00:05:10.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255049 s, 16.1 MB/s 00:05:10.021 09:37:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.021 09:37:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:10.021 09:37:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.021 09:37:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:10.021 09:37:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:10.021 09:37:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.021 09:37:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.021 09:37:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.021 09:37:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.021 09:37:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.021 09:37:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.021 { 00:05:10.021 "nbd_device": "/dev/nbd0", 00:05:10.021 "bdev_name": "Malloc0" 00:05:10.021 }, 00:05:10.021 { 00:05:10.021 "nbd_device": "/dev/nbd1", 00:05:10.021 "bdev_name": "Malloc1" 00:05:10.021 } 
00:05:10.021 ]' 00:05:10.021 09:37:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.021 { 00:05:10.021 "nbd_device": "/dev/nbd0", 00:05:10.021 "bdev_name": "Malloc0" 00:05:10.021 }, 00:05:10.021 { 00:05:10.021 "nbd_device": "/dev/nbd1", 00:05:10.021 "bdev_name": "Malloc1" 00:05:10.021 } 00:05:10.021 ]' 00:05:10.021 09:37:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.021 09:37:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.021 /dev/nbd1' 00:05:10.279 09:37:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.279 /dev/nbd1' 00:05:10.279 09:37:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.279 09:37:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.279 09:37:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.279 09:37:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.279 09:37:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.279 09:37:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.279 09:37:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.279 09:37:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.279 09:37:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.280 256+0 records in 00:05:10.280 256+0 records out 00:05:10.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474772 s, 221 MB/s 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.280 256+0 records in 00:05:10.280 256+0 records out 00:05:10.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145499 s, 72.1 MB/s 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.280 256+0 records in 00:05:10.280 256+0 records out 00:05:10.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196757 s, 53.3 MB/s 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.280 09:37:48 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.280 09:37:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.538 09:37:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.797 09:37:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.797 09:37:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.797 09:37:49 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.797 09:37:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.797 09:37:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.797 09:37:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.797 09:37:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:10.797 09:37:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.797 09:37:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.797 09:37:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.797 09:37:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.797 09:37:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.797 09:37:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.363 09:37:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:11.621 [2024-11-28 09:37:50.483725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.880 [2024-11-28 09:37:50.549777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.880 [2024-11-28 09:37:50.549878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.880 [2024-11-28 09:37:50.651944] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.880 [2024-11-28 09:37:50.651993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:14.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.404 09:37:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58425 /var/tmp/spdk-nbd.sock 00:05:14.404 09:37:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58425 ']' 00:05:14.404 09:37:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.404 09:37:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.404 09:37:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
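The killprocess call traced just below (and the earlier one for pid 58319) follows the same guarded pattern: verify the pid is set and still alive, confirm the process name is not a sudo wrapper, then signal and reap it. A hedged sketch of that pattern, not the actual autotest_common.sh helper:

    # Kill an SPDK app by pid, refusing to signal anything unexpected (sketch of killprocess)
    killprocess_sketch() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0            # nothing to do if it already exited
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name != sudo ]] || return 1            # never kill a sudo wrapper by mistake
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                        # reap it (works when pid is a child of this shell)
    }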
00:05:14.404 09:37:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.404 09:37:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.404 09:37:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.404 09:37:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.404 09:37:53 event.app_repeat -- event/event.sh@39 -- # killprocess 58425 00:05:14.404 09:37:53 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58425 ']' 00:05:14.404 09:37:53 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58425 00:05:14.404 09:37:53 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:14.404 09:37:53 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.404 09:37:53 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58425 00:05:14.404 killing process with pid 58425 00:05:14.404 09:37:53 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.404 09:37:53 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.404 09:37:53 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58425' 00:05:14.404 09:37:53 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58425 00:05:14.404 09:37:53 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58425 00:05:14.973 spdk_app_start is called in Round 0. 00:05:14.973 Shutdown signal received, stop current app iteration 00:05:14.973 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:05:14.973 spdk_app_start is called in Round 1. 00:05:14.973 Shutdown signal received, stop current app iteration 00:05:14.973 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:05:14.973 spdk_app_start is called in Round 2. 00:05:14.973 Shutdown signal received, stop current app iteration 00:05:14.973 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:05:14.973 spdk_app_start is called in Round 3. 00:05:14.973 Shutdown signal received, stop current app iteration 00:05:14.973 ************************************ 00:05:14.973 END TEST app_repeat 00:05:14.973 ************************************ 00:05:14.973 09:37:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:14.973 09:37:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:14.973 00:05:14.973 real 0m17.596s 00:05:14.973 user 0m38.651s 00:05:14.973 sys 0m2.015s 00:05:14.973 09:37:53 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.973 09:37:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.973 09:37:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:14.973 09:37:53 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:14.973 09:37:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.973 09:37:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.973 09:37:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.973 ************************************ 00:05:14.973 START TEST cpu_locks 00:05:14.973 ************************************ 00:05:14.973 09:37:53 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:14.973 * Looking for test storage... 
00:05:14.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:14.973 09:37:53 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.973 09:37:53 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.973 09:37:53 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.233 09:37:53 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.233 09:37:53 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:15.233 09:37:53 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.233 09:37:53 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.233 --rc genhtml_branch_coverage=1 00:05:15.233 --rc genhtml_function_coverage=1 00:05:15.233 --rc genhtml_legend=1 00:05:15.233 --rc geninfo_all_blocks=1 00:05:15.233 --rc geninfo_unexecuted_blocks=1 00:05:15.233 00:05:15.233 ' 00:05:15.233 09:37:53 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.233 --rc genhtml_branch_coverage=1 00:05:15.233 --rc genhtml_function_coverage=1 
00:05:15.233 --rc genhtml_legend=1 00:05:15.233 --rc geninfo_all_blocks=1 00:05:15.233 --rc geninfo_unexecuted_blocks=1 00:05:15.233 00:05:15.233 ' 00:05:15.233 09:37:53 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.233 --rc genhtml_branch_coverage=1 00:05:15.233 --rc genhtml_function_coverage=1 00:05:15.233 --rc genhtml_legend=1 00:05:15.233 --rc geninfo_all_blocks=1 00:05:15.233 --rc geninfo_unexecuted_blocks=1 00:05:15.233 00:05:15.233 ' 00:05:15.233 09:37:53 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.233 --rc genhtml_branch_coverage=1 00:05:15.233 --rc genhtml_function_coverage=1 00:05:15.233 --rc genhtml_legend=1 00:05:15.233 --rc geninfo_all_blocks=1 00:05:15.233 --rc geninfo_unexecuted_blocks=1 00:05:15.233 00:05:15.233 ' 00:05:15.233 09:37:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:15.233 09:37:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:15.233 09:37:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:15.233 09:37:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:15.233 09:37:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.233 09:37:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.233 09:37:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.233 ************************************ 00:05:15.233 START TEST default_locks 00:05:15.233 ************************************ 00:05:15.233 09:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:15.233 09:37:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58850 00:05:15.233 09:37:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58850 00:05:15.233 09:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58850 ']' 00:05:15.233 09:37:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.233 09:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.233 09:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.233 09:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.233 09:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.233 09:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.233 [2024-11-28 09:37:53.964964] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
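The xtrace run above is scripts/common.sh deciding whether the installed lcov is older than 2 (lt 1.15 2 -> cmp_versions 1.15 '<' 2) so the extra --rc branch/function coverage options are only passed to old lcov releases. A simplified sketch of that dotted-version comparison follows; ver_lt is an invented helper name, not the real scripts/common.sh function, but the IFS=.-: splitting and component-wise compare mirror the trace:

ver_lt() {
    local -a a b
    IFS=.-: read -ra a <<< "$1"           # split on . - : as cmp_versions does
    IFS=.-: read -ra b <<< "$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
        (( x < y )) && return 0           # first differing component decides
        (( x > y )) && return 1
    done
    return 1                              # equal versions are not "less than"
}
ver_lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'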
00:05:15.233 [2024-11-28 09:37:53.965077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58850 ] 00:05:15.492 [2024-11-28 09:37:54.121781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.492 [2024-11-28 09:37:54.195125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58850 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58850 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58850 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58850 ']' 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58850 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58850 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.058 killing process with pid 58850 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58850' 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58850 00:05:16.058 09:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58850 00:05:17.433 09:37:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58850 00:05:17.433 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:17.433 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58850 00:05:17.433 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:17.433 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.433 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:17.433 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.433 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58850 00:05:17.433 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58850 ']' 00:05:17.433 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.433 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.433 09:37:56 
event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.433 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.433 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.433 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58850) - No such process 00:05:17.433 ERROR: process (pid: 58850) is no longer running 00:05:17.433 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.433 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:17.434 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:17.434 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.434 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:17.434 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:17.434 09:37:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:17.434 09:37:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:17.434 09:37:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:17.434 09:37:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:17.434 00:05:17.434 real 0m2.192s 00:05:17.434 user 0m2.156s 00:05:17.434 sys 0m0.386s 00:05:17.434 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.434 09:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.434 ************************************ 00:05:17.434 END TEST default_locks 00:05:17.434 ************************************ 00:05:17.434 09:37:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:17.434 09:37:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.434 09:37:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.434 09:37:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.434 ************************************ 00:05:17.434 START TEST default_locks_via_rpc 00:05:17.434 ************************************ 00:05:17.434 09:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:17.434 09:37:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58914 00:05:17.434 09:37:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58914 00:05:17.434 09:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58914 ']' 00:05:17.434 09:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.434 09:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.434 09:37:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
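The default_locks test that just finished reduces to: start one spdk_tgt on core 0, confirm the per-core lock is visible, kill it, and confirm both the process and its lock files are gone. A minimal sketch with the same paths as the trace; check_core_lock is an invented name (the test's locks_exist helper does the same lslocks | grep), and the sleep stands in for waitforlisten:

check_core_lock() {
    lslocks -p "$1" | grep -q spdk_cpu_lock   # the core lock shows up as a lock held on /var/tmp/spdk_cpu_lock_NNN
}
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # claim core 0
pid=$!
sleep 1                                                    # the real test waits for /var/tmp/spdk.sock instead
check_core_lock "$pid" && echo "core lock held by $pid"
kill "$pid"; wait "$pid" 2>/dev/null
ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no lock files remain after shutdown"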
00:05:17.434 09:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.434 09:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.434 09:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.434 [2024-11-28 09:37:56.196109] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:17.434 [2024-11-28 09:37:56.196238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58914 ] 00:05:17.693 [2024-11-28 09:37:56.350766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.693 [2024-11-28 09:37:56.425396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58914 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58914 00:05:18.259 09:37:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.518 09:37:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58914 00:05:18.518 09:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58914 ']' 00:05:18.518 09:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58914 00:05:18.518 09:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:18.518 09:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.518 09:37:57 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58914 00:05:18.518 09:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.518 09:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.518 killing process with pid 58914 00:05:18.518 09:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58914' 00:05:18.518 09:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58914 00:05:18.518 09:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58914 00:05:19.892 00:05:19.892 real 0m2.257s 00:05:19.892 user 0m2.264s 00:05:19.892 sys 0m0.391s 00:05:19.892 09:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.892 ************************************ 00:05:19.892 END TEST default_locks_via_rpc 00:05:19.892 ************************************ 00:05:19.892 09:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.892 09:37:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:19.892 09:37:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.892 09:37:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.892 09:37:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.892 ************************************ 00:05:19.892 START TEST non_locking_app_on_locked_coremask 00:05:19.892 ************************************ 00:05:19.892 09:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:19.892 09:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58966 00:05:19.892 09:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58966 /var/tmp/spdk.sock 00:05:19.892 09:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58966 ']' 00:05:19.892 09:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.892 09:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.892 09:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.892 09:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.892 09:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.892 09:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.892 [2024-11-28 09:37:58.505363] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
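default_locks_via_rpc, wrapped up above, toggles the same lock at runtime over JSON-RPC instead of at startup. Condensed to the calls seen in the trace (rpc_cmd is the common/autotest_common.sh helper talking to /var/tmp/spdk.sock; spdk_tgt_pid is the variable the test itself uses):

rpc_cmd framework_disable_cpumask_locks   # running target releases its /var/tmp/spdk_cpu_lock_* files
rpc_cmd framework_enable_cpumask_locks    # and re-acquires them on request
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo 'lock re-acquired'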
00:05:19.892 [2024-11-28 09:37:58.505453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58966 ] 00:05:19.892 [2024-11-28 09:37:58.654413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.892 [2024-11-28 09:37:58.729061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.824 09:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.824 09:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:20.824 09:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58982 00:05:20.824 09:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58982 /var/tmp/spdk2.sock 00:05:20.824 09:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58982 ']' 00:05:20.824 09:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.824 09:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.824 09:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:20.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.824 09:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.824 09:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.824 09:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.824 [2024-11-28 09:37:59.426331] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:20.824 [2024-11-28 09:37:59.426798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58982 ] 00:05:20.824 [2024-11-28 09:37:59.589597] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:20.824 [2024-11-28 09:37:59.589631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.083 [2024-11-28 09:37:59.740758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.016 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.016 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:22.016 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58966 00:05:22.016 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58966 00:05:22.017 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.275 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58966 00:05:22.275 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58966 ']' 00:05:22.275 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58966 00:05:22.275 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:22.275 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.275 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58966 00:05:22.275 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.275 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.275 killing process with pid 58966 00:05:22.275 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58966' 00:05:22.275 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58966 00:05:22.275 09:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58966 00:05:24.802 09:38:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58982 00:05:24.802 09:38:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58982 ']' 00:05:24.802 09:38:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58982 00:05:24.802 09:38:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:24.802 09:38:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.802 09:38:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58982 00:05:24.802 09:38:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.802 killing process with pid 58982 00:05:24.802 09:38:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.802 09:38:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58982' 00:05:24.802 09:38:03 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58982 00:05:24.802 09:38:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58982 00:05:25.738 00:05:25.738 real 0m6.105s 00:05:25.738 user 0m6.405s 00:05:25.738 sys 0m0.785s 00:05:25.738 09:38:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.738 ************************************ 00:05:25.738 END TEST non_locking_app_on_locked_coremask 00:05:25.738 ************************************ 00:05:25.738 09:38:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.738 09:38:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:25.738 09:38:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.738 09:38:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.738 09:38:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.738 ************************************ 00:05:25.738 START TEST locking_app_on_unlocked_coremask 00:05:25.738 ************************************ 00:05:25.738 09:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:25.738 09:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59073 00:05:25.738 09:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59073 /var/tmp/spdk.sock 00:05:25.738 09:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59073 ']' 00:05:25.738 09:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:25.738 09:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.738 09:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.738 09:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.738 09:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.738 09:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.998 [2024-11-28 09:38:04.665138] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:25.998 [2024-11-28 09:38:04.665240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59073 ] 00:05:25.998 [2024-11-28 09:38:04.827232] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
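non_locking_app_on_locked_coremask, which just passed, shows that --disable-cpumask-locks lets a second target share a core that is already locked. Reduced to the two launch lines from the trace (waitforlisten and cleanup omitted; only the first process will show spdk_cpu_lock in lslocks):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                         # first target locks core 0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
    -r /var/tmp/spdk2.sock &                                                     # same core, no lock claim, own RPC socket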
00:05:25.998 [2024-11-28 09:38:04.827275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.260 [2024-11-28 09:38:04.921757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.829 09:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.829 09:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:26.829 09:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59089 00:05:26.829 09:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59089 /var/tmp/spdk2.sock 00:05:26.829 09:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59089 ']' 00:05:26.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.829 09:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.829 09:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.829 09:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.829 09:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:26.829 09:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.829 09:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.829 [2024-11-28 09:38:05.579312] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:26.829 [2024-11-28 09:38:05.579430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59089 ] 00:05:27.090 [2024-11-28 09:38:05.751267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.090 [2024-11-28 09:38:05.942259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.468 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.468 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:28.468 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59089 00:05:28.468 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59089 00:05:28.468 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.726 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59073 00:05:28.726 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59073 ']' 00:05:28.726 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59073 00:05:28.726 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.726 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.726 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59073 00:05:28.726 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.726 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.726 killing process with pid 59073 00:05:28.726 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59073' 00:05:28.726 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59073 00:05:28.726 09:38:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59073 00:05:31.254 09:38:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59089 00:05:31.254 09:38:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59089 ']' 00:05:31.254 09:38:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59089 00:05:31.254 09:38:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:31.254 09:38:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.255 09:38:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59089 00:05:31.255 09:38:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.255 killing process with pid 59089 00:05:31.255 09:38:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.255 09:38:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59089' 00:05:31.255 09:38:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59089 00:05:31.255 09:38:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59089 00:05:32.191 00:05:32.192 real 0m6.404s 00:05:32.192 user 0m6.620s 00:05:32.192 sys 0m0.856s 00:05:32.192 09:38:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.192 ************************************ 00:05:32.192 END TEST locking_app_on_unlocked_coremask 00:05:32.192 ************************************ 00:05:32.192 09:38:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.192 09:38:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:32.192 09:38:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.192 09:38:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.192 09:38:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.192 ************************************ 00:05:32.192 START TEST locking_app_on_locked_coremask 00:05:32.192 ************************************ 00:05:32.192 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:32.192 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59180 00:05:32.192 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59180 /var/tmp/spdk.sock 00:05:32.192 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59180 ']' 00:05:32.192 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.192 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.192 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.192 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.192 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.192 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.504 [2024-11-28 09:38:11.131130] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
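locking_app_on_unlocked_coremask, above, is the mirror case: the first target starts with --disable-cpumask-locks, so the second, lock-enabled target on the same core is the one that ends up owning the lock. Checking who holds it follows the trace's lslocks pattern; pid_unlocked and pid_locked are placeholder names for the two PIDs:

lslocks -p "$pid_unlocked" | grep -c spdk_cpu_lock   # expect 0: started with --disable-cpumask-locks
lslocks -p "$pid_locked"   | grep -c spdk_cpu_lock   # expect 1: claimed core 0 normally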
00:05:32.504 [2024-11-28 09:38:11.131229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59180 ] 00:05:32.504 [2024-11-28 09:38:11.279068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.504 [2024-11-28 09:38:11.363561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59196 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59196 /var/tmp/spdk2.sock 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59196 /var/tmp/spdk2.sock 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59196 /var/tmp/spdk2.sock 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59196 ']' 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.439 09:38:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.439 [2024-11-28 09:38:12.045847] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:33.439 [2024-11-28 09:38:12.045962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59196 ] 00:05:33.439 [2024-11-28 09:38:12.209560] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59180 has claimed it. 00:05:33.439 [2024-11-28 09:38:12.209600] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:34.006 ERROR: process (pid: 59196) is no longer running 00:05:34.006 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59196) - No such process 00:05:34.006 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.006 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:34.006 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:34.006 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.006 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.006 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.006 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59180 00:05:34.006 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59180 00:05:34.006 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.265 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59180 00:05:34.265 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59180 ']' 00:05:34.265 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59180 00:05:34.265 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:34.265 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.265 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59180 00:05:34.265 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.265 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.265 killing process with pid 59180 00:05:34.265 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59180' 00:05:34.265 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59180 00:05:34.265 09:38:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59180 00:05:35.202 00:05:35.202 real 0m3.012s 00:05:35.202 user 0m3.252s 00:05:35.202 sys 0m0.514s 00:05:35.203 09:38:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.203 ************************************ 00:05:35.203 END 
TEST locking_app_on_locked_coremask 00:05:35.203 ************************************ 00:05:35.203 09:38:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.462 09:38:14 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:35.462 09:38:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.462 09:38:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.462 09:38:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.462 ************************************ 00:05:35.462 START TEST locking_overlapped_coremask 00:05:35.462 ************************************ 00:05:35.462 09:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:35.462 09:38:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59249 00:05:35.462 09:38:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59249 /var/tmp/spdk.sock 00:05:35.462 09:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59249 ']' 00:05:35.462 09:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.462 09:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.462 09:38:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:35.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.462 09:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.462 09:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.462 09:38:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.462 [2024-11-28 09:38:14.200417] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
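locking_app_on_locked_coremask, finished above, provokes the conflicting case: with locks left enabled on both sides, a second target on an already-locked core must refuse to start. As a sketch of what the test expects, using the same binary, mask, and socket as the trace:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                        # first instance holds core 0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # second instance, same core
# expected: "Cannot create lock on core 0, probably process <pid> has claimed it."
#           "Unable to acquire lock on assigned core mask - exiting." and a nonzero exit status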
00:05:35.462 [2024-11-28 09:38:14.200522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59249 ] 00:05:35.721 [2024-11-28 09:38:14.350648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.721 [2024-11-28 09:38:14.429406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.721 [2024-11-28 09:38:14.429761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.721 [2024-11-28 09:38:14.429763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59267 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59267 /var/tmp/spdk2.sock 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59267 /var/tmp/spdk2.sock 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:36.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59267 /var/tmp/spdk2.sock 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59267 ']' 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.286 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.286 [2024-11-28 09:38:15.102218] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:36.286 [2024-11-28 09:38:15.102522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59267 ] 00:05:36.543 [2024-11-28 09:38:15.276104] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59249 has claimed it. 00:05:36.543 [2024-11-28 09:38:15.280177] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:37.107 ERROR: process (pid: 59267) is no longer running 00:05:37.107 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59267) - No such process 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59249 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59249 ']' 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59249 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59249 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.107 killing process with pid 59249 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59249' 00:05:37.107 09:38:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59249 00:05:37.107 09:38:15 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59249 00:05:38.484 ************************************ 00:05:38.484 END TEST locking_overlapped_coremask 00:05:38.484 ************************************ 00:05:38.484 00:05:38.484 real 0m2.793s 00:05:38.484 user 0m7.676s 00:05:38.484 sys 0m0.396s 00:05:38.484 09:38:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.484 09:38:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.484 09:38:16 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:38.484 09:38:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.484 09:38:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.484 09:38:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.484 ************************************ 00:05:38.484 START TEST locking_overlapped_coremask_via_rpc 00:05:38.484 ************************************ 00:05:38.484 09:38:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:38.484 09:38:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59320 00:05:38.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.484 09:38:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59320 /var/tmp/spdk.sock 00:05:38.484 09:38:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59320 ']' 00:05:38.484 09:38:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.484 09:38:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.484 09:38:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:38.484 09:38:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.484 09:38:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.484 09:38:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.484 [2024-11-28 09:38:17.044078] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:38.484 [2024-11-28 09:38:17.044536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59320 ] 00:05:38.484 [2024-11-28 09:38:17.194553] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
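locking_overlapped_coremask, above, uses overlapping masks (0x7 vs 0x1c, both containing core 2): the second target is rejected, and check_remaining_locks then confirms only the surviving 0x7 target's lock files are left. The check is the same glob comparison the trace shows:

locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 of the -m 0x7 target
[[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "only the 0x7 target's locks remain"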
00:05:38.484 [2024-11-28 09:38:17.194694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.484 [2024-11-28 09:38:17.272247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.484 [2024-11-28 09:38:17.272845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.484 [2024-11-28 09:38:17.272863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.051 09:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.051 09:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.051 09:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59338 00:05:39.051 09:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59338 /var/tmp/spdk2.sock 00:05:39.051 09:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:39.051 09:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59338 ']' 00:05:39.051 09:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.051 09:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.051 09:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.051 09:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.051 09:38:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.309 [2024-11-28 09:38:17.958596] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:39.309 [2024-11-28 09:38:17.958876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59338 ] 00:05:39.309 [2024-11-28 09:38:18.122582] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:39.309 [2024-11-28 09:38:18.122717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.568 [2024-11-28 09:38:18.281574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.568 [2024-11-28 09:38:18.281596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.568 [2024-11-28 09:38:18.281623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:40.502 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.502 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.502 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:40.502 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.502 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.502 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.502 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.502 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:40.502 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.502 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:40.502 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.502 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:40.502 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.502 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.502 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.503 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.503 [2024-11-28 09:38:19.216258] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59320 has claimed it. 00:05:40.503 request: 00:05:40.503 { 00:05:40.503 "method": "framework_enable_cpumask_locks", 00:05:40.503 "req_id": 1 00:05:40.503 } 00:05:40.503 Got JSON-RPC error response 00:05:40.503 response: 00:05:40.503 { 00:05:40.503 "code": -32603, 00:05:40.503 "message": "Failed to claim CPU core: 2" 00:05:40.503 } 00:05:40.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
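For context on the error response above: both targets in this test are started with --disable-cpumask-locks, so neither claims its per-core lock files (/var/tmp/spdk_cpu_lock_<core>) at startup; the locks are claimed later through the framework_enable_cpumask_locks RPC. The first target runs with -m 0x7 (cores 0-2) and claims its locks first; the second runs with -m 0x1c (cores 2-4), so the masks overlap on core 2 and the second claim is expected to fail with the -32603 "Failed to claim CPU core: 2" response seen here. A minimal sketch of that sequence, with paths shortened to be relative to the SPDK repo root used above (an illustration of the flow, not part of the test output):

  # first target: cores 0-2, per-core lock files deferred at startup
  build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  scripts/rpc.py framework_enable_cpumask_locks            # claims /var/tmp/spdk_cpu_lock_000..002

  # second target: cores 2-4, on a separate RPC socket
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already locked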
00:05:40.503 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:40.503 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:40.503 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.503 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:40.503 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.503 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59320 /var/tmp/spdk.sock 00:05:40.503 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59320 ']' 00:05:40.503 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.503 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.503 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.503 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.503 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59338 /var/tmp/spdk2.sock 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59338 ']' 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.762 ************************************ 00:05:40.762 END TEST locking_overlapped_coremask_via_rpc 00:05:40.762 ************************************ 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:40.762 00:05:40.762 real 0m2.659s 00:05:40.762 user 0m1.040s 00:05:40.762 sys 0m0.135s 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.762 09:38:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.021 09:38:19 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:41.021 09:38:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59320 ]] 00:05:41.021 09:38:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59320 00:05:41.021 09:38:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59320 ']' 00:05:41.021 09:38:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59320 00:05:41.021 09:38:19 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:41.021 09:38:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.021 09:38:19 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59320 00:05:41.021 killing process with pid 59320 00:05:41.021 09:38:19 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.021 09:38:19 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.021 09:38:19 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59320' 00:05:41.021 09:38:19 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59320 00:05:41.021 09:38:19 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59320 00:05:42.397 09:38:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59338 ]] 00:05:42.398 09:38:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59338 00:05:42.398 09:38:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59338 ']' 00:05:42.398 09:38:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59338 00:05:42.398 09:38:20 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:42.398 09:38:20 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.398 
09:38:20 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59338 00:05:42.398 killing process with pid 59338 00:05:42.398 09:38:20 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:42.398 09:38:20 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:42.398 09:38:20 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59338' 00:05:42.398 09:38:20 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59338 00:05:42.398 09:38:20 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59338 00:05:43.336 09:38:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.336 09:38:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:43.336 09:38:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59320 ]] 00:05:43.336 09:38:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59320 00:05:43.336 Process with pid 59320 is not found 00:05:43.336 09:38:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59320 ']' 00:05:43.336 09:38:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59320 00:05:43.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59320) - No such process 00:05:43.336 09:38:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59320 is not found' 00:05:43.336 09:38:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59338 ]] 00:05:43.336 09:38:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59338 00:05:43.336 09:38:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59338 ']' 00:05:43.336 09:38:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59338 00:05:43.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59338) - No such process 00:05:43.336 Process with pid 59338 is not found 00:05:43.336 09:38:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59338 is not found' 00:05:43.336 09:38:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.336 ************************************ 00:05:43.336 END TEST cpu_locks 00:05:43.336 ************************************ 00:05:43.336 00:05:43.336 real 0m28.333s 00:05:43.336 user 0m48.397s 00:05:43.336 sys 0m4.210s 00:05:43.336 09:38:22 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.336 09:38:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.336 ************************************ 00:05:43.336 END TEST event 00:05:43.336 ************************************ 00:05:43.336 00:05:43.336 real 0m55.456s 00:05:43.336 user 1m42.327s 00:05:43.336 sys 0m7.003s 00:05:43.336 09:38:22 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.336 09:38:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.336 09:38:22 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:43.336 09:38:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.336 09:38:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.336 09:38:22 -- common/autotest_common.sh@10 -- # set +x 00:05:43.336 ************************************ 00:05:43.336 START TEST thread 00:05:43.336 ************************************ 00:05:43.336 09:38:22 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:43.596 * Looking for test storage... 
00:05:43.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:43.596 09:38:22 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.596 09:38:22 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.596 09:38:22 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.596 09:38:22 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.596 09:38:22 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.596 09:38:22 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.596 09:38:22 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.596 09:38:22 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.596 09:38:22 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.596 09:38:22 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.596 09:38:22 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.596 09:38:22 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.596 09:38:22 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.596 09:38:22 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.596 09:38:22 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.596 09:38:22 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:43.596 09:38:22 thread -- scripts/common.sh@345 -- # : 1 00:05:43.596 09:38:22 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.596 09:38:22 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.596 09:38:22 thread -- scripts/common.sh@365 -- # decimal 1 00:05:43.596 09:38:22 thread -- scripts/common.sh@353 -- # local d=1 00:05:43.596 09:38:22 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.596 09:38:22 thread -- scripts/common.sh@355 -- # echo 1 00:05:43.596 09:38:22 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.596 09:38:22 thread -- scripts/common.sh@366 -- # decimal 2 00:05:43.596 09:38:22 thread -- scripts/common.sh@353 -- # local d=2 00:05:43.596 09:38:22 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.596 09:38:22 thread -- scripts/common.sh@355 -- # echo 2 00:05:43.596 09:38:22 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.596 09:38:22 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.596 09:38:22 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.596 09:38:22 thread -- scripts/common.sh@368 -- # return 0 00:05:43.596 09:38:22 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.596 09:38:22 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.596 --rc genhtml_branch_coverage=1 00:05:43.596 --rc genhtml_function_coverage=1 00:05:43.596 --rc genhtml_legend=1 00:05:43.596 --rc geninfo_all_blocks=1 00:05:43.596 --rc geninfo_unexecuted_blocks=1 00:05:43.596 00:05:43.596 ' 00:05:43.596 09:38:22 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.596 --rc genhtml_branch_coverage=1 00:05:43.596 --rc genhtml_function_coverage=1 00:05:43.596 --rc genhtml_legend=1 00:05:43.596 --rc geninfo_all_blocks=1 00:05:43.596 --rc geninfo_unexecuted_blocks=1 00:05:43.596 00:05:43.596 ' 00:05:43.596 09:38:22 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:43.596 --rc genhtml_branch_coverage=1 00:05:43.596 --rc genhtml_function_coverage=1 00:05:43.596 --rc genhtml_legend=1 00:05:43.596 --rc geninfo_all_blocks=1 00:05:43.596 --rc geninfo_unexecuted_blocks=1 00:05:43.596 00:05:43.596 ' 00:05:43.596 09:38:22 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.596 --rc genhtml_branch_coverage=1 00:05:43.596 --rc genhtml_function_coverage=1 00:05:43.596 --rc genhtml_legend=1 00:05:43.596 --rc geninfo_all_blocks=1 00:05:43.596 --rc geninfo_unexecuted_blocks=1 00:05:43.596 00:05:43.596 ' 00:05:43.596 09:38:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:43.596 09:38:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:43.596 09:38:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.596 09:38:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.596 ************************************ 00:05:43.596 START TEST thread_poller_perf 00:05:43.596 ************************************ 00:05:43.596 09:38:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:43.596 [2024-11-28 09:38:22.336595] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:43.596 [2024-11-28 09:38:22.336800] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59487 ] 00:05:43.855 [2024-11-28 09:38:22.494038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.855 [2024-11-28 09:38:22.569548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.855 Running 1000 pollers for 1 seconds with 1 microseconds period. 
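The report that follows summarizes this run: poller_perf registers 1000 pollers with the requested period (1 microsecond here), drives them on the reactor for 1 second, then prints the busy TSC cycles, the total number of poller executions, and the TSC frequency. The per-execution cost it reports works out to busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A minimal sketch of that conversion in shell arithmetic, using the figures from the report below (illustration only, not test output):

  busy_cyc=2609268946; total_run_count=403000; tsc_hz=2600000000
  cost_cyc=$(( busy_cyc / total_run_count ))          # ~6474 cycles per poller execution
  cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))     # ~2490 nanoseconds at a 2.6 GHz TSC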
00:05:45.232 [2024-11-28T09:38:24.112Z] ====================================== 00:05:45.232 [2024-11-28T09:38:24.112Z] busy:2609268946 (cyc) 00:05:45.232 [2024-11-28T09:38:24.112Z] total_run_count: 403000 00:05:45.232 [2024-11-28T09:38:24.112Z] tsc_hz: 2600000000 (cyc) 00:05:45.232 [2024-11-28T09:38:24.112Z] ====================================== 00:05:45.232 [2024-11-28T09:38:24.112Z] poller_cost: 6474 (cyc), 2490 (nsec) 00:05:45.232 00:05:45.232 ************************************ 00:05:45.232 END TEST thread_poller_perf 00:05:45.232 ************************************ 00:05:45.232 real 0m1.395s 00:05:45.232 user 0m1.223s 00:05:45.232 sys 0m0.066s 00:05:45.232 09:38:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.232 09:38:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.232 09:38:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:45.232 09:38:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:45.232 09:38:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.232 09:38:23 thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.232 ************************************ 00:05:45.232 START TEST thread_poller_perf 00:05:45.232 ************************************ 00:05:45.232 09:38:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:45.232 [2024-11-28 09:38:23.774329] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:45.232 [2024-11-28 09:38:23.774408] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59524 ] 00:05:45.232 [2024-11-28 09:38:23.922102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.232 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:45.232 [2024-11-28 09:38:23.995680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.615 [2024-11-28T09:38:25.495Z] ====================================== 00:05:46.615 [2024-11-28T09:38:25.495Z] busy:2602487960 (cyc) 00:05:46.615 [2024-11-28T09:38:25.495Z] total_run_count: 5226000 00:05:46.615 [2024-11-28T09:38:25.495Z] tsc_hz: 2600000000 (cyc) 00:05:46.615 [2024-11-28T09:38:25.495Z] ====================================== 00:05:46.615 [2024-11-28T09:38:25.495Z] poller_cost: 497 (cyc), 191 (nsec) 00:05:46.615 ************************************ 00:05:46.615 END TEST thread_poller_perf 00:05:46.615 ************************************ 00:05:46.615 00:05:46.615 real 0m1.361s 00:05:46.615 user 0m1.203s 00:05:46.615 sys 0m0.053s 00:05:46.615 09:38:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.615 09:38:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.615 09:38:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:46.615 00:05:46.615 real 0m2.984s 00:05:46.615 user 0m2.543s 00:05:46.615 sys 0m0.224s 00:05:46.615 ************************************ 00:05:46.615 END TEST thread 00:05:46.615 ************************************ 00:05:46.615 09:38:25 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.615 09:38:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.615 09:38:25 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:46.615 09:38:25 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:46.615 09:38:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.615 09:38:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.615 09:38:25 -- common/autotest_common.sh@10 -- # set +x 00:05:46.615 ************************************ 00:05:46.615 START TEST app_cmdline 00:05:46.615 ************************************ 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:46.615 * Looking for test storage... 
00:05:46.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:46.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
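The target being waited for here is started with an RPC allowlist (--rpcs-allowed spdk_get_version,rpc_get_methods, as shown in the trace below), which is the point of the cmdline test: spdk_get_version and rpc_get_methods must succeed, while any other method, such as env_dpdk_get_mem_stats, must be rejected with JSON-RPC error -32601 ("Method not found"). A minimal sketch of that check, assuming the same repo layout as above (illustration, not test output):

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version         # allowed: returns the version object printed below
  scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two permitted methods
  scripts/rpc.py env_dpdk_get_mem_stats   # not in the allowlist: fails with -32601 Method not found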
00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.615 09:38:25 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.615 --rc genhtml_branch_coverage=1 00:05:46.615 --rc genhtml_function_coverage=1 00:05:46.615 --rc genhtml_legend=1 00:05:46.615 --rc geninfo_all_blocks=1 00:05:46.615 --rc geninfo_unexecuted_blocks=1 00:05:46.615 00:05:46.615 ' 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.615 --rc genhtml_branch_coverage=1 00:05:46.615 --rc genhtml_function_coverage=1 00:05:46.615 --rc genhtml_legend=1 00:05:46.615 --rc geninfo_all_blocks=1 00:05:46.615 --rc geninfo_unexecuted_blocks=1 00:05:46.615 00:05:46.615 ' 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.615 --rc genhtml_branch_coverage=1 00:05:46.615 --rc genhtml_function_coverage=1 00:05:46.615 --rc genhtml_legend=1 00:05:46.615 --rc geninfo_all_blocks=1 00:05:46.615 --rc geninfo_unexecuted_blocks=1 00:05:46.615 00:05:46.615 ' 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.615 --rc genhtml_branch_coverage=1 00:05:46.615 --rc genhtml_function_coverage=1 00:05:46.615 --rc genhtml_legend=1 00:05:46.615 --rc geninfo_all_blocks=1 00:05:46.615 --rc geninfo_unexecuted_blocks=1 00:05:46.615 00:05:46.615 ' 00:05:46.615 09:38:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:46.615 09:38:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59607 00:05:46.615 09:38:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59607 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59607 ']' 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.615 09:38:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:46.615 09:38:25 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:46.615 [2024-11-28 09:38:25.386336] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:46.615 [2024-11-28 09:38:25.386453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59607 ] 00:05:46.876 [2024-11-28 09:38:25.546898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.876 [2024-11-28 09:38:25.643232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.446 09:38:26 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.446 09:38:26 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:47.446 09:38:26 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:47.707 { 00:05:47.707 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:05:47.707 "fields": { 00:05:47.707 "major": 25, 00:05:47.707 "minor": 1, 00:05:47.707 "patch": 0, 00:05:47.707 "suffix": "-pre", 00:05:47.707 "commit": "35cd3e84d" 00:05:47.707 } 00:05:47.707 } 00:05:47.707 09:38:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:47.707 09:38:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:47.707 09:38:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:47.707 09:38:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:47.707 09:38:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:47.707 09:38:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.707 09:38:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:47.707 09:38:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:47.707 09:38:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:47.707 09:38:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.707 09:38:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:47.707 09:38:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:47.707 09:38:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:47.707 09:38:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:47.707 09:38:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:47.707 09:38:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:47.707 09:38:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.707 09:38:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:47.707 09:38:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.707 09:38:26 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:47.707 09:38:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.707 09:38:26 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:47.707 09:38:26 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:47.707 09:38:26 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:47.968 request: 00:05:47.968 { 00:05:47.968 "method": "env_dpdk_get_mem_stats", 00:05:47.968 "req_id": 1 00:05:47.968 } 00:05:47.968 Got JSON-RPC error response 00:05:47.968 response: 00:05:47.968 { 00:05:47.968 "code": -32601, 00:05:47.968 "message": "Method not found" 00:05:47.968 } 00:05:47.968 09:38:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:47.968 09:38:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.968 09:38:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.968 09:38:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.968 09:38:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59607 00:05:47.968 09:38:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59607 ']' 00:05:47.968 09:38:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59607 00:05:47.968 09:38:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:47.968 09:38:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.968 09:38:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59607 00:05:47.968 killing process with pid 59607 00:05:47.968 09:38:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.968 09:38:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.968 09:38:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59607' 00:05:47.968 09:38:26 app_cmdline -- common/autotest_common.sh@973 -- # kill 59607 00:05:47.968 09:38:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 59607 00:05:49.348 ************************************ 00:05:49.348 END TEST app_cmdline 00:05:49.348 ************************************ 00:05:49.348 00:05:49.348 real 0m2.869s 00:05:49.348 user 0m3.124s 00:05:49.348 sys 0m0.422s 00:05:49.348 09:38:28 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.348 09:38:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:49.348 09:38:28 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:49.348 09:38:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.348 09:38:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.348 09:38:28 -- common/autotest_common.sh@10 -- # set +x 00:05:49.348 ************************************ 00:05:49.348 START TEST version 00:05:49.348 ************************************ 00:05:49.348 09:38:28 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:49.348 * Looking for test storage... 
00:05:49.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:49.348 09:38:28 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.348 09:38:28 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.348 09:38:28 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.348 09:38:28 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.348 09:38:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.348 09:38:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.348 09:38:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.348 09:38:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.348 09:38:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.348 09:38:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.348 09:38:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.348 09:38:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.348 09:38:28 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.348 09:38:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.348 09:38:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.348 09:38:28 version -- scripts/common.sh@344 -- # case "$op" in 00:05:49.348 09:38:28 version -- scripts/common.sh@345 -- # : 1 00:05:49.348 09:38:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.348 09:38:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.348 09:38:28 version -- scripts/common.sh@365 -- # decimal 1 00:05:49.348 09:38:28 version -- scripts/common.sh@353 -- # local d=1 00:05:49.348 09:38:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.348 09:38:28 version -- scripts/common.sh@355 -- # echo 1 00:05:49.348 09:38:28 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.607 09:38:28 version -- scripts/common.sh@366 -- # decimal 2 00:05:49.607 09:38:28 version -- scripts/common.sh@353 -- # local d=2 00:05:49.607 09:38:28 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.607 09:38:28 version -- scripts/common.sh@355 -- # echo 2 00:05:49.607 09:38:28 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.607 09:38:28 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.607 09:38:28 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.607 09:38:28 version -- scripts/common.sh@368 -- # return 0 00:05:49.607 09:38:28 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.607 09:38:28 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.607 --rc genhtml_branch_coverage=1 00:05:49.607 --rc genhtml_function_coverage=1 00:05:49.607 --rc genhtml_legend=1 00:05:49.607 --rc geninfo_all_blocks=1 00:05:49.607 --rc geninfo_unexecuted_blocks=1 00:05:49.607 00:05:49.607 ' 00:05:49.607 09:38:28 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.607 --rc genhtml_branch_coverage=1 00:05:49.607 --rc genhtml_function_coverage=1 00:05:49.607 --rc genhtml_legend=1 00:05:49.607 --rc geninfo_all_blocks=1 00:05:49.607 --rc geninfo_unexecuted_blocks=1 00:05:49.607 00:05:49.607 ' 00:05:49.607 09:38:28 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.607 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:49.607 --rc genhtml_branch_coverage=1 00:05:49.607 --rc genhtml_function_coverage=1 00:05:49.607 --rc genhtml_legend=1 00:05:49.607 --rc geninfo_all_blocks=1 00:05:49.607 --rc geninfo_unexecuted_blocks=1 00:05:49.607 00:05:49.607 ' 00:05:49.607 09:38:28 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.607 --rc genhtml_branch_coverage=1 00:05:49.607 --rc genhtml_function_coverage=1 00:05:49.607 --rc genhtml_legend=1 00:05:49.607 --rc geninfo_all_blocks=1 00:05:49.607 --rc geninfo_unexecuted_blocks=1 00:05:49.607 00:05:49.607 ' 00:05:49.607 09:38:28 version -- app/version.sh@17 -- # get_header_version major 00:05:49.607 09:38:28 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.607 09:38:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:49.607 09:38:28 version -- app/version.sh@14 -- # cut -f2 00:05:49.607 09:38:28 version -- app/version.sh@17 -- # major=25 00:05:49.607 09:38:28 version -- app/version.sh@18 -- # get_header_version minor 00:05:49.607 09:38:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:49.607 09:38:28 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.607 09:38:28 version -- app/version.sh@14 -- # cut -f2 00:05:49.607 09:38:28 version -- app/version.sh@18 -- # minor=1 00:05:49.607 09:38:28 version -- app/version.sh@19 -- # get_header_version patch 00:05:49.607 09:38:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:49.607 09:38:28 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.607 09:38:28 version -- app/version.sh@14 -- # cut -f2 00:05:49.607 09:38:28 version -- app/version.sh@19 -- # patch=0 00:05:49.607 09:38:28 version -- app/version.sh@20 -- # get_header_version suffix 00:05:49.607 09:38:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:49.607 09:38:28 version -- app/version.sh@14 -- # cut -f2 00:05:49.607 09:38:28 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.607 09:38:28 version -- app/version.sh@20 -- # suffix=-pre 00:05:49.607 09:38:28 version -- app/version.sh@22 -- # version=25.1 00:05:49.607 09:38:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:49.607 09:38:28 version -- app/version.sh@28 -- # version=25.1rc0 00:05:49.607 09:38:28 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:49.607 09:38:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:49.607 09:38:28 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:49.607 09:38:28 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:49.607 ************************************ 00:05:49.607 END TEST version 00:05:49.607 ************************************ 00:05:49.607 00:05:49.607 real 0m0.194s 00:05:49.607 user 0m0.112s 00:05:49.607 sys 0m0.106s 00:05:49.607 09:38:28 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.607 09:38:28 version -- common/autotest_common.sh@10 -- # set +x 00:05:49.607 09:38:28 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:49.607 09:38:28 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:49.607 09:38:28 -- spdk/autotest.sh@194 -- # uname -s 00:05:49.607 09:38:28 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:49.607 09:38:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:49.607 09:38:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:49.607 09:38:28 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:49.607 09:38:28 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:49.607 09:38:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:49.607 09:38:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.607 09:38:28 -- common/autotest_common.sh@10 -- # set +x 00:05:49.607 ************************************ 00:05:49.607 START TEST blockdev_nvme 00:05:49.607 ************************************ 00:05:49.607 09:38:28 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:49.607 * Looking for test storage... 00:05:49.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:49.607 09:38:28 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.607 09:38:28 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.607 09:38:28 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.607 09:38:28 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.607 09:38:28 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:49.607 09:38:28 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.607 09:38:28 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.607 --rc genhtml_branch_coverage=1 00:05:49.607 --rc genhtml_function_coverage=1 00:05:49.608 --rc genhtml_legend=1 00:05:49.608 --rc geninfo_all_blocks=1 00:05:49.608 --rc geninfo_unexecuted_blocks=1 00:05:49.608 00:05:49.608 ' 00:05:49.608 09:38:28 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.608 --rc genhtml_branch_coverage=1 00:05:49.608 --rc genhtml_function_coverage=1 00:05:49.608 --rc genhtml_legend=1 00:05:49.608 --rc geninfo_all_blocks=1 00:05:49.608 --rc geninfo_unexecuted_blocks=1 00:05:49.608 00:05:49.608 ' 00:05:49.608 09:38:28 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.608 --rc genhtml_branch_coverage=1 00:05:49.608 --rc genhtml_function_coverage=1 00:05:49.608 --rc genhtml_legend=1 00:05:49.608 --rc geninfo_all_blocks=1 00:05:49.608 --rc geninfo_unexecuted_blocks=1 00:05:49.608 00:05:49.608 ' 00:05:49.608 09:38:28 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.608 --rc genhtml_branch_coverage=1 00:05:49.608 --rc genhtml_function_coverage=1 00:05:49.608 --rc genhtml_legend=1 00:05:49.608 --rc geninfo_all_blocks=1 00:05:49.608 --rc geninfo_unexecuted_blocks=1 00:05:49.608 00:05:49.608 ' 00:05:49.608 09:38:28 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:49.608 09:38:28 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:49.608 09:38:28 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:49.608 09:38:28 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:49.608 09:38:28 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:49.608 09:38:28 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:49.608 09:38:28 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:49.608 09:38:28 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:49.608 09:38:28 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:49.608 09:38:28 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:05:49.608 09:38:28 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:05:49.608 09:38:28 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59779 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59779 00:05:49.867 09:38:28 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:49.867 09:38:28 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59779 ']' 00:05:49.867 09:38:28 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.867 09:38:28 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.867 09:38:28 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.867 09:38:28 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.867 09:38:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.867 [2024-11-28 09:38:28.569967] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:49.867 [2024-11-28 09:38:28.570272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59779 ] 00:05:49.867 [2024-11-28 09:38:28.723531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.126 [2024-11-28 09:38:28.816908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.696 09:38:29 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.696 09:38:29 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:50.696 09:38:29 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:05:50.696 09:38:29 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:05:50.696 09:38:29 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:50.696 09:38:29 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:50.696 09:38:29 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:50.696 09:38:29 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:50.696 09:38:29 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.696 09:38:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:50.955 09:38:29 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.955 09:38:29 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:05:50.955 09:38:29 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.955 09:38:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:50.955 09:38:29 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.955 09:38:29 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:05:50.955 09:38:29 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:05:50.955 09:38:29 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.955 09:38:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:50.955 09:38:29 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.955 09:38:29 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:05:50.955 09:38:29 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.955 09:38:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:50.955 09:38:29 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.955 09:38:29 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:50.955 09:38:29 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.955 09:38:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:50.955 09:38:29 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.955 09:38:29 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:05:50.955 09:38:29 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:05:50.955 09:38:29 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.955 09:38:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:50.955 09:38:29 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:05:51.215 09:38:29 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.215 09:38:29 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:05:51.215 09:38:29 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:05:51.216 09:38:29 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "039b0283-4551-4f0f-a41b-87ababd8ff08"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "039b0283-4551-4f0f-a41b-87ababd8ff08",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f09fc9df-c746-4670-b893-5cc20c0ff56f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f09fc9df-c746-4670-b893-5cc20c0ff56f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "1fa84732-2f0a-4e54-abb0-658305a3d149"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1fa84732-2f0a-4e54-abb0-658305a3d149",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4da2e9b1-6b86-4bbf-a53c-35c99d8778a3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4da2e9b1-6b86-4bbf-a53c-35c99d8778a3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4d2c21ea-48bd-454f-a9ed-03ba94c84115"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "4d2c21ea-48bd-454f-a9ed-03ba94c84115",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "728b6de7-84fd-4583-b777-7d110e2a93aa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "728b6de7-84fd-4583-b777-7d110e2a93aa",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:51.216 09:38:29 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:05:51.216 09:38:29 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:05:51.216 09:38:29 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:05:51.216 09:38:29 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59779 00:05:51.216 09:38:29 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59779 ']' 00:05:51.216 09:38:29 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59779 00:05:51.216 09:38:29 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:51.216 09:38:29 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.216 09:38:29 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59779 00:05:51.216 killing process with pid 59779 00:05:51.216 09:38:29 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.216 09:38:29 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.216 09:38:29 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59779' 00:05:51.216 09:38:29 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59779 00:05:51.216 09:38:29 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59779 00:05:52.643 09:38:31 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:52.643 09:38:31 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:52.643 09:38:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:52.643 09:38:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.643 09:38:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:52.643 ************************************ 00:05:52.643 START TEST bdev_hello_world 00:05:52.643 ************************************ 00:05:52.643 09:38:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:52.643 [2024-11-28 09:38:31.451333] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:52.643 [2024-11-28 09:38:31.451447] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59863 ] 00:05:52.900 [2024-11-28 09:38:31.606322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.900 [2024-11-28 09:38:31.680106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.464 [2024-11-28 09:38:32.170239] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:53.464 [2024-11-28 09:38:32.170276] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:53.464 [2024-11-28 09:38:32.170290] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:53.464 [2024-11-28 09:38:32.172183] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:53.464 [2024-11-28 09:38:32.172575] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:53.464 [2024-11-28 09:38:32.172598] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:53.464 [2024-11-28 09:38:32.172728] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
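The hello_bdev pass traced above can be reproduced by hand. A minimal sketch, assuming the same repo checkout under /home/vagrant/spdk_repo/spdk and the bdev JSON config generated for this run:

    # hello_bdev opens the named bdev, gets an I/O channel, writes "Hello World!",
    # then reads the string back, which is what the NOTICE lines above record.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1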
00:05:53.464 00:05:53.464 [2024-11-28 09:38:32.172740] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:54.029 00:05:54.029 real 0m1.330s 00:05:54.029 user 0m1.063s 00:05:54.029 sys 0m0.163s 00:05:54.029 09:38:32 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.029 09:38:32 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:54.029 ************************************ 00:05:54.029 END TEST bdev_hello_world 00:05:54.029 ************************************ 00:05:54.029 09:38:32 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:05:54.029 09:38:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:54.029 09:38:32 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.029 09:38:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:54.029 ************************************ 00:05:54.029 START TEST bdev_bounds 00:05:54.029 ************************************ 00:05:54.029 09:38:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:05:54.029 09:38:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59900 00:05:54.029 09:38:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.029 Process bdevio pid: 59900 00:05:54.029 09:38:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59900' 00:05:54.029 09:38:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59900 00:05:54.029 09:38:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:54.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.029 09:38:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59900 ']' 00:05:54.029 09:38:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.029 09:38:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.029 09:38:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.029 09:38:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.029 09:38:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:54.029 [2024-11-28 09:38:32.843044] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:54.029 [2024-11-28 09:38:32.843175] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59900 ] 00:05:54.288 [2024-11-28 09:38:32.995404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.288 [2024-11-28 09:38:33.071084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.288 [2024-11-28 09:38:33.071243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.288 [2024-11-28 09:38:33.071279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.854 09:38:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.854 09:38:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:05:54.854 09:38:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:54.854 I/O targets: 00:05:54.854 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:54.854 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:54.854 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:54.854 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:54.854 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:54.854 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:54.854 00:05:54.854 00:05:54.854 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.854 http://cunit.sourceforge.net/ 00:05:54.854 00:05:54.854 00:05:54.854 Suite: bdevio tests on: Nvme3n1 00:05:54.854 Test: blockdev write read block ...passed 00:05:55.113 Test: blockdev write zeroes read block ...passed 00:05:55.113 Test: blockdev write zeroes read no split ...passed 00:05:55.113 Test: blockdev write zeroes read split ...passed 00:05:55.113 Test: blockdev write zeroes read split partial ...passed 00:05:55.113 Test: blockdev reset ...[2024-11-28 09:38:33.774831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:55.113 [2024-11-28 09:38:33.777700] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:05:55.113 passed 00:05:55.113 Test: blockdev write read 8 blocks ...passed 00:05:55.113 Test: blockdev write read size > 128k ...passed 00:05:55.113 Test: blockdev write read invalid size ...passed 00:05:55.113 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:55.113 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:55.113 Test: blockdev write read max offset ...passed 00:05:55.113 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:55.113 Test: blockdev writev readv 8 blocks ...passed 00:05:55.113 Test: blockdev writev readv 30 x 1block ...passed 00:05:55.113 Test: blockdev writev readv block ...passed 00:05:55.113 Test: blockdev writev readv size > 128k ...passed 00:05:55.113 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:55.113 Test: blockdev comparev and writev ...[2024-11-28 09:38:33.783715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bce0a000 len:0x1000 00:05:55.113 [2024-11-28 09:38:33.783842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:55.113 passed 00:05:55.113 Test: blockdev nvme passthru rw ...passed 00:05:55.113 Test: blockdev nvme passthru vendor specific ...[2024-11-28 09:38:33.784413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:55.113 [2024-11-28 09:38:33.784506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:55.113 passed 00:05:55.113 Test: blockdev nvme admin passthru ...passed 00:05:55.113 Test: blockdev copy ...passed 00:05:55.113 Suite: bdevio tests on: Nvme2n3 00:05:55.113 Test: blockdev write read block ...passed 00:05:55.113 Test: blockdev write zeroes read block ...passed 00:05:55.113 Test: blockdev write zeroes read no split ...passed 00:05:55.113 Test: blockdev write zeroes read split ...passed 00:05:55.113 Test: blockdev write zeroes read split partial ...passed 00:05:55.113 Test: blockdev reset ...[2024-11-28 09:38:33.828829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:55.113 [2024-11-28 09:38:33.831764] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:55.113 passed 00:05:55.113 Test: blockdev write read 8 blocks ...passed 00:05:55.113 Test: blockdev write read size > 128k ...passed 00:05:55.113 Test: blockdev write read invalid size ...passed 00:05:55.113 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:55.113 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:55.113 Test: blockdev write read max offset ...passed 00:05:55.113 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:55.113 Test: blockdev writev readv 8 blocks ...passed 00:05:55.113 Test: blockdev writev readv 30 x 1block ...passed 00:05:55.113 Test: blockdev writev readv block ...passed 00:05:55.113 Test: blockdev writev readv size > 128k ...passed 00:05:55.113 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:55.113 Test: blockdev comparev and writev ...[2024-11-28 09:38:33.837532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29f806000 len:0x1000 00:05:55.113 [2024-11-28 09:38:33.837629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:55.113 passed 00:05:55.113 Test: blockdev nvme passthru rw ...passed 00:05:55.113 Test: blockdev nvme passthru vendor specific ...[2024-11-28 09:38:33.838210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:55.113 [2024-11-28 09:38:33.838280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:55.113 passed 00:05:55.113 Test: blockdev nvme admin passthru ...passed 00:05:55.113 Test: blockdev copy ...passed 00:05:55.113 Suite: bdevio tests on: Nvme2n2 00:05:55.113 Test: blockdev write read block ...passed 00:05:55.113 Test: blockdev write zeroes read block ...passed 00:05:55.113 Test: blockdev write zeroes read no split ...passed 00:05:55.113 Test: blockdev write zeroes read split ...passed 00:05:55.113 Test: blockdev write zeroes read split partial ...passed 00:05:55.113 Test: blockdev reset ...[2024-11-28 09:38:33.879549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:55.113 [2024-11-28 09:38:33.882275] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:55.113 passed 00:05:55.113 Test: blockdev write read 8 blocks ...passed 00:05:55.113 Test: blockdev write read size > 128k ...passed 00:05:55.113 Test: blockdev write read invalid size ...passed 00:05:55.113 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:55.113 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:55.113 Test: blockdev write read max offset ...passed 00:05:55.113 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:55.113 Test: blockdev writev readv 8 blocks ...passed 00:05:55.113 Test: blockdev writev readv 30 x 1block ...passed 00:05:55.113 Test: blockdev writev readv block ...passed 00:05:55.113 Test: blockdev writev readv size > 128k ...passed 00:05:55.113 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:55.113 Test: blockdev comparev and writev ...[2024-11-28 09:38:33.887935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7a3c000 len:0x1000 00:05:55.113 [2024-11-28 09:38:33.888027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:55.113 passed 00:05:55.113 Test: blockdev nvme passthru rw ...passed 00:05:55.113 Test: blockdev nvme passthru vendor specific ...[2024-11-28 09:38:33.888592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:55.113 passed 00:05:55.113 Test: blockdev nvme admin passthru ...[2024-11-28 09:38:33.888658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:55.113 passed 00:05:55.113 Test: blockdev copy ...passed 00:05:55.113 Suite: bdevio tests on: Nvme2n1 00:05:55.113 Test: blockdev write read block ...passed 00:05:55.113 Test: blockdev write zeroes read block ...passed 00:05:55.113 Test: blockdev write zeroes read no split ...passed 00:05:55.113 Test: blockdev write zeroes read split ...passed 00:05:55.113 Test: blockdev write zeroes read split partial ...passed 00:05:55.113 Test: blockdev reset ...[2024-11-28 09:38:33.929706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:55.113 [2024-11-28 09:38:33.932201] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:55.113 passed 00:05:55.113 Test: blockdev write read 8 blocks ...passed 00:05:55.113 Test: blockdev write read size > 128k ...passed 00:05:55.113 Test: blockdev write read invalid size ...passed 00:05:55.113 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:55.113 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:55.114 Test: blockdev write read max offset ...passed 00:05:55.114 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:55.114 Test: blockdev writev readv 8 blocks ...passed 00:05:55.114 Test: blockdev writev readv 30 x 1block ...passed 00:05:55.114 Test: blockdev writev readv block ...passed 00:05:55.114 Test: blockdev writev readv size > 128k ...passed 00:05:55.114 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:55.114 Test: blockdev comparev and writev ...[2024-11-28 09:38:33.937826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7a38000 len:0x1000 00:05:55.114 [2024-11-28 09:38:33.937918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:55.114 passed 00:05:55.114 Test: blockdev nvme passthru rw ...passed 00:05:55.114 Test: blockdev nvme passthru vendor specific ...[2024-11-28 09:38:33.938401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:55.114 [2024-11-28 09:38:33.938467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:55.114 passed 00:05:55.114 Test: blockdev nvme admin passthru ...passed 00:05:55.114 Test: blockdev copy ...passed 00:05:55.114 Suite: bdevio tests on: Nvme1n1 00:05:55.114 Test: blockdev write read block ...passed 00:05:55.114 Test: blockdev write zeroes read block ...passed 00:05:55.114 Test: blockdev write zeroes read no split ...passed 00:05:55.114 Test: blockdev write zeroes read split ...passed 00:05:55.114 Test: blockdev write zeroes read split partial ...passed 00:05:55.114 Test: blockdev reset ...[2024-11-28 09:38:33.981044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:55.114 [2024-11-28 09:38:33.983368] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:05:55.114 passed 00:05:55.114 Test: blockdev write read 8 blocks ...passed 00:05:55.114 Test: blockdev write read size > 128k ...passed 00:05:55.114 Test: blockdev write read invalid size ...passed 00:05:55.114 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:55.114 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:55.114 Test: blockdev write read max offset ...passed 00:05:55.114 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:55.114 Test: blockdev writev readv 8 blocks ...passed 00:05:55.114 Test: blockdev writev readv 30 x 1block ...passed 00:05:55.114 Test: blockdev writev readv block ...passed 00:05:55.114 Test: blockdev writev readv size > 128k ...passed 00:05:55.114 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:55.114 Test: blockdev comparev and writev ...[2024-11-28 09:38:33.989514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7a34000 len:0x1000 00:05:55.114 [2024-11-28 09:38:33.989607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:55.114 passed 00:05:55.114 Test: blockdev nvme passthru rw ...passed 00:05:55.114 Test: blockdev nvme passthru vendor specific ...[2024-11-28 09:38:33.990142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:55.114 [2024-11-28 09:38:33.990225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:55.114 passed 00:05:55.114 Test: blockdev nvme admin passthru ...passed 00:05:55.114 Test: blockdev copy ...passed 00:05:55.114 Suite: bdevio tests on: Nvme0n1 00:05:55.114 Test: blockdev write read block ...passed 00:05:55.114 Test: blockdev write zeroes read block ...passed 00:05:55.372 Test: blockdev write zeroes read no split ...passed 00:05:55.372 Test: blockdev write zeroes read split ...passed 00:05:55.372 Test: blockdev write zeroes read split partial ...passed 00:05:55.372 Test: blockdev reset ...[2024-11-28 09:38:34.032549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:55.372 [2024-11-28 09:38:34.034927] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:05:55.372 passed 00:05:55.372 Test: blockdev write read 8 blocks ...passed 00:05:55.372 Test: blockdev write read size > 128k ...passed 00:05:55.372 Test: blockdev write read invalid size ...passed 00:05:55.372 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:55.372 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:55.372 Test: blockdev write read max offset ...passed 00:05:55.372 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:55.372 Test: blockdev writev readv 8 blocks ...passed 00:05:55.372 Test: blockdev writev readv 30 x 1block ...passed 00:05:55.372 Test: blockdev writev readv block ...passed 00:05:55.372 Test: blockdev writev readv size > 128k ...passed 00:05:55.372 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:55.372 Test: blockdev comparev and writev ...passed 00:05:55.372 Test: blockdev nvme passthru rw ...[2024-11-28 09:38:34.043510] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:55.372 separate metadata which is not supported yet. 00:05:55.372 passed 00:05:55.372 Test: blockdev nvme passthru vendor specific ...[2024-11-28 09:38:34.043984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:55.372 [2024-11-28 09:38:34.044058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:55.372 passed 00:05:55.372 Test: blockdev nvme admin passthru ...passed 00:05:55.372 Test: blockdev copy ...passed 00:05:55.372 00:05:55.372 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.372 suites 6 6 n/a 0 0 00:05:55.372 tests 138 138 138 0 0 00:05:55.372 asserts 893 893 893 0 n/a 00:05:55.372 00:05:55.372 Elapsed time = 0.869 seconds 00:05:55.372 0 00:05:55.372 09:38:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59900 00:05:55.372 09:38:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59900 ']' 00:05:55.372 09:38:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59900 00:05:55.372 09:38:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:05:55.372 09:38:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.372 09:38:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59900 00:05:55.372 killing process with pid 59900 00:05:55.372 09:38:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.372 09:38:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.372 09:38:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59900' 00:05:55.372 09:38:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59900 00:05:55.372 09:38:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59900 00:05:55.940 ************************************ 00:05:55.940 END TEST bdev_bounds 00:05:55.940 ************************************ 00:05:55.940 09:38:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:55.940 00:05:55.940 real 0m1.844s 00:05:55.940 user 0m4.722s 00:05:55.940 sys 0m0.271s 00:05:55.940 09:38:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.940 09:38:34 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:55.940 09:38:34 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:55.940 09:38:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:55.940 09:38:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.940 09:38:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:55.940 ************************************ 00:05:55.940 START TEST bdev_nbd 00:05:55.940 ************************************ 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59948 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59948 /var/tmp/spdk-nbd.sock 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59948 ']' 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 
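The bdev_nbd test starting here exercises the NBD export path via the RPCs traced below. A rough sketch of one round trip, assuming the bdev_svc app is already listening on /var/tmp/spdk-nbd.sock as launched above:

    # Export a bdev as a kernel NBD device, read one 4 KiB block through it,
    # then tear the export down and list what remains exported.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC nbd_start_disk Nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_get_disks    # expected to report an empty list afterwards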
00:05:55.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.940 09:38:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:55.940 [2024-11-28 09:38:34.743577] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:55.940 [2024-11-28 09:38:34.743664] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:56.199 [2024-11-28 09:38:34.894412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.199 [2024-11-28 09:38:34.969309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.777 09:38:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.777 09:38:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:05:56.777 09:38:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:56.777 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.777 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:56.777 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:56.777 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:56.777 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.777 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:56.777 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:56.777 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:56.777 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:56.777 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:56.777 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:56.777 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.038 09:38:35 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.038 1+0 records in 00:05:57.038 1+0 records out 00:05:57.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000915603 s, 4.5 MB/s 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:57.038 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:57.039 09:38:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.300 1+0 records in 00:05:57.300 1+0 records out 00:05:57.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115589 s, 3.5 MB/s 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:57.300 09:38:36 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:57.300 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:57.560 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:57.560 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.561 1+0 records in 00:05:57.561 1+0 records out 00:05:57.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010801 s, 3.8 MB/s 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:57.561 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i 
= 1 )) 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.822 1+0 records in 00:05:57.822 1+0 records out 00:05:57.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106287 s, 3.9 MB/s 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:57.822 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:58.083 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:58.083 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:58.083 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:58.083 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:05:58.083 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:58.083 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:58.084 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:58.084 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:05:58.084 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:58.084 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:58.084 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:58.084 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.084 1+0 records in 00:05:58.084 1+0 records out 00:05:58.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00252495 s, 1.6 MB/s 00:05:58.084 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.084 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:58.084 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.084 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:58.084 09:38:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:58.084 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:58.084 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:58.084 09:38:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.345 1+0 records in 00:05:58.345 1+0 records out 00:05:58.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000935859 s, 4.4 MB/s 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:58.345 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.606 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:58.606 { 00:05:58.606 "nbd_device": "/dev/nbd0", 00:05:58.606 "bdev_name": "Nvme0n1" 00:05:58.606 }, 00:05:58.606 { 00:05:58.606 "nbd_device": "/dev/nbd1", 00:05:58.606 "bdev_name": "Nvme1n1" 00:05:58.606 }, 00:05:58.606 { 00:05:58.606 "nbd_device": "/dev/nbd2", 00:05:58.606 "bdev_name": "Nvme2n1" 00:05:58.606 }, 00:05:58.606 { 00:05:58.606 "nbd_device": "/dev/nbd3", 00:05:58.606 "bdev_name": "Nvme2n2" 00:05:58.606 }, 00:05:58.606 { 00:05:58.606 "nbd_device": "/dev/nbd4", 00:05:58.606 "bdev_name": "Nvme2n3" 00:05:58.606 }, 00:05:58.606 { 00:05:58.606 "nbd_device": "/dev/nbd5", 00:05:58.606 "bdev_name": "Nvme3n1" 00:05:58.606 } 00:05:58.606 ]' 00:05:58.606 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:58.606 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:58.606 { 00:05:58.606 "nbd_device": "/dev/nbd0", 00:05:58.606 "bdev_name": "Nvme0n1" 00:05:58.606 }, 00:05:58.606 { 00:05:58.606 "nbd_device": "/dev/nbd1", 00:05:58.606 "bdev_name": "Nvme1n1" 00:05:58.606 }, 00:05:58.606 { 00:05:58.606 
"nbd_device": "/dev/nbd2", 00:05:58.606 "bdev_name": "Nvme2n1" 00:05:58.606 }, 00:05:58.606 { 00:05:58.606 "nbd_device": "/dev/nbd3", 00:05:58.606 "bdev_name": "Nvme2n2" 00:05:58.606 }, 00:05:58.606 { 00:05:58.606 "nbd_device": "/dev/nbd4", 00:05:58.606 "bdev_name": "Nvme2n3" 00:05:58.606 }, 00:05:58.606 { 00:05:58.606 "nbd_device": "/dev/nbd5", 00:05:58.606 "bdev_name": "Nvme3n1" 00:05:58.606 } 00:05:58.606 ]' 00:05:58.606 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:58.606 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:58.606 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.606 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:58.606 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.606 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:58.606 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.606 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.866 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.866 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.866 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.866 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.866 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.866 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.866 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:58.866 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.866 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.866 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:59.127 09:38:37 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.127 09:38:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:59.389 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:59.389 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:59.389 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:59.389 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.389 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.389 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:59.389 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.389 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.389 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.389 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:59.650 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:59.650 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:59.650 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:59.650 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.650 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.650 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:59.650 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.650 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.650 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.650 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:59.911 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:59.911 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:59.911 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:59.911 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.911 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.911 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:59.911 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
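The waitfornbd_exit calls traced here poll /proc/partitions until the NBD name disappears after nbd_stop_disk. Approximate shape of that helper (the retry delay is an assumption; the trace only records the counter, the grep, and the break):

    waitfornbd_exit_sketch() {
        local nbd_name=$1
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1   # assumed delay between polls
            else
                break       # device node no longer listed, export is gone
            fi
        done
        return 0
    }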
00:05:59.911 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.911 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.911 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.911 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.170 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.170 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.170 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:00.171 09:38:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:00.430 /dev/nbd0 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:00.430 1+0 records in 00:06:00.430 1+0 records out 00:06:00.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439508 s, 9.3 MB/s 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:00.430 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:00.689 /dev/nbd1 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:00.689 1+0 records in 00:06:00.689 1+0 records out 
00:06:00.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288106 s, 14.2 MB/s 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:00.689 /dev/nbd10 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:00.689 1+0 records in 00:06:00.689 1+0 records out 00:06:00.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488604 s, 8.4 MB/s 00:06:00.689 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:00.947 /dev/nbd11 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:00.947 09:38:39 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:00.947 1+0 records in 00:06:00.947 1+0 records out 00:06:00.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000882483 s, 4.6 MB/s 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:00.947 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:01.205 /dev/nbd12 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.205 1+0 records in 00:06:01.205 1+0 records out 00:06:01.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000725667 s, 5.6 MB/s 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:01.205 09:38:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:01.465 /dev/nbd13 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.465 1+0 records in 00:06:01.465 1+0 records out 00:06:01.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.001179 s, 3.5 MB/s 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.465 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.726 { 00:06:01.726 "nbd_device": "/dev/nbd0", 00:06:01.726 "bdev_name": "Nvme0n1" 00:06:01.726 }, 00:06:01.726 { 00:06:01.726 "nbd_device": "/dev/nbd1", 00:06:01.726 "bdev_name": "Nvme1n1" 00:06:01.726 }, 00:06:01.726 { 00:06:01.726 "nbd_device": "/dev/nbd10", 00:06:01.726 "bdev_name": "Nvme2n1" 00:06:01.726 }, 00:06:01.726 { 00:06:01.726 "nbd_device": "/dev/nbd11", 00:06:01.726 "bdev_name": "Nvme2n2" 00:06:01.726 }, 00:06:01.726 
{ 00:06:01.726 "nbd_device": "/dev/nbd12", 00:06:01.726 "bdev_name": "Nvme2n3" 00:06:01.726 }, 00:06:01.726 { 00:06:01.726 "nbd_device": "/dev/nbd13", 00:06:01.726 "bdev_name": "Nvme3n1" 00:06:01.726 } 00:06:01.726 ]' 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.726 { 00:06:01.726 "nbd_device": "/dev/nbd0", 00:06:01.726 "bdev_name": "Nvme0n1" 00:06:01.726 }, 00:06:01.726 { 00:06:01.726 "nbd_device": "/dev/nbd1", 00:06:01.726 "bdev_name": "Nvme1n1" 00:06:01.726 }, 00:06:01.726 { 00:06:01.726 "nbd_device": "/dev/nbd10", 00:06:01.726 "bdev_name": "Nvme2n1" 00:06:01.726 }, 00:06:01.726 { 00:06:01.726 "nbd_device": "/dev/nbd11", 00:06:01.726 "bdev_name": "Nvme2n2" 00:06:01.726 }, 00:06:01.726 { 00:06:01.726 "nbd_device": "/dev/nbd12", 00:06:01.726 "bdev_name": "Nvme2n3" 00:06:01.726 }, 00:06:01.726 { 00:06:01.726 "nbd_device": "/dev/nbd13", 00:06:01.726 "bdev_name": "Nvme3n1" 00:06:01.726 } 00:06:01.726 ]' 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.726 /dev/nbd1 00:06:01.726 /dev/nbd10 00:06:01.726 /dev/nbd11 00:06:01.726 /dev/nbd12 00:06:01.726 /dev/nbd13' 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.726 /dev/nbd1 00:06:01.726 /dev/nbd10 00:06:01.726 /dev/nbd11 00:06:01.726 /dev/nbd12 00:06:01.726 /dev/nbd13' 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:01.726 256+0 records in 00:06:01.726 256+0 records out 00:06:01.726 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472119 s, 222 MB/s 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.726 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.987 256+0 records in 00:06:01.987 256+0 records out 00:06:01.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177743 s, 5.9 MB/s 00:06:01.987 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.987 09:38:40 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.987 256+0 records in 00:06:01.987 256+0 records out 00:06:01.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224247 s, 4.7 MB/s 00:06:01.987 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.987 09:38:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:02.247 256+0 records in 00:06:02.247 256+0 records out 00:06:02.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.194892 s, 5.4 MB/s 00:06:02.247 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.247 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:02.506 256+0 records in 00:06:02.506 256+0 records out 00:06:02.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180839 s, 5.8 MB/s 00:06:02.506 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.506 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:02.506 256+0 records in 00:06:02.506 256+0 records out 00:06:02.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142164 s, 7.4 MB/s 00:06:02.506 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.506 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:02.764 256+0 records in 00:06:02.764 256+0 records out 00:06:02.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0500144 s, 21.0 MB/s 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:02.764 09:38:41 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.764 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.022 09:38:41 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.023 09:38:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:03.280 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:03.280 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:03.280 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:03.280 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.281 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.281 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:03.281 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.281 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.281 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.281 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:03.539 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:03.539 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:03.539 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:03.539 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.539 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.539 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:03.539 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.539 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.539 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.539 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:03.797 
09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.797 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:04.061 09:38:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:04.332 malloc_lvol_verify 00:06:04.332 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:04.590 43ed3717-76c3-4770-925d-c5adf8e282f2 00:06:04.590 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:04.590 7f2f33b9-27b7-4290-9507-7bece0e056aa 00:06:04.590 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:04.848 /dev/nbd0 00:06:04.848 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:04.848 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:04.848 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:04.848 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:04.848 09:38:43 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:04.848 mke2fs 1.47.0 (5-Feb-2023) 00:06:04.848 Discarding device blocks: 0/4096 done 00:06:04.848 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:04.848 00:06:04.848 Allocating group tables: 0/1 done 00:06:04.848 Writing inode tables: 0/1 done 00:06:04.848 Creating journal (1024 blocks): done 00:06:04.848 Writing superblocks and filesystem accounting information: 0/1 done 00:06:04.848 00:06:04.848 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:04.848 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.848 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:04.848 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.848 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:04.848 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.848 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59948 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59948 ']' 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59948 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59948 00:06:05.106 killing process with pid 59948 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59948' 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59948 00:06:05.106 09:38:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59948 00:06:05.673 ************************************ 00:06:05.673 END TEST bdev_nbd 00:06:05.673 ************************************ 00:06:05.673 09:38:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:05.673 00:06:05.673 real 0m9.725s 00:06:05.673 user 0m13.544s 00:06:05.673 sys 0m3.181s 00:06:05.673 09:38:44 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.673 09:38:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:05.673 09:38:44 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:05.673 09:38:44 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:05.673 skipping fio tests on NVMe due to multi-ns failures. 00:06:05.673 09:38:44 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:05.673 09:38:44 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:05.673 09:38:44 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:05.673 09:38:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:05.673 09:38:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.673 09:38:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.673 ************************************ 00:06:05.673 START TEST bdev_verify 00:06:05.673 ************************************ 00:06:05.673 09:38:44 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:05.673 [2024-11-28 09:38:44.535757] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:05.673 [2024-11-28 09:38:44.535888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60322 ] 00:06:05.931 [2024-11-28 09:38:44.694950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.931 [2024-11-28 09:38:44.771689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.931 [2024-11-28 09:38:44.771782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.494 Running I/O for 5 seconds... 
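The bdev_verify stage above drives the six NVMe bdevs with SPDK's bdevperf example, using the flags visible in the run_test line (-q 128 -o 4096 -w verify -t 5 -C -m 0x3). A minimal sketch of replaying that same workload by hand follows; it assumes the generated test/bdev/bdev.json from this run is still present and that the NVMe devices are still bound to SPDK, neither of which the log itself guarantees.

#!/usr/bin/env bash
# Sketch only: replay the 4 KiB verify workload captured in this log.
# Flag values are copied verbatim from the run_test invocation above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

# -q 128    : 128 outstanding I/Os per job
# -o 4096   : 4 KiB I/O size
# -w verify : write a pattern, then read it back and compare
# -t 5      : run for 5 seconds
# -m 0x3    : run reactors on cores 0 and 1 (the two reactors seen above)
"$SPDK_DIR/build/examples/bdevperf" \
  --json "$SPDK_DIR/test/bdev/bdev.json" \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The five IOPS progress samples and the per-bdev latency table that follow are the output of this run.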
00:06:08.805 23424.00 IOPS, 91.50 MiB/s [2024-11-28T09:38:48.625Z] 22848.00 IOPS, 89.25 MiB/s [2024-11-28T09:38:49.566Z] 21717.33 IOPS, 84.83 MiB/s [2024-11-28T09:38:50.508Z] 21072.00 IOPS, 82.31 MiB/s [2024-11-28T09:38:50.508Z] 20761.60 IOPS, 81.10 MiB/s 00:06:11.628 Latency(us) 00:06:11.628 [2024-11-28T09:38:50.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:11.628 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:11.628 Verification LBA range: start 0x0 length 0xbd0bd 00:06:11.628 Nvme0n1 : 5.06 1734.03 6.77 0.00 0.00 73587.43 7057.72 68560.74 00:06:11.628 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:11.628 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:11.628 Nvme0n1 : 5.07 1679.23 6.56 0.00 0.00 75440.69 7763.50 64527.75 00:06:11.628 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:11.628 Verification LBA range: start 0x0 length 0xa0000 00:06:11.628 Nvme1n1 : 5.06 1733.21 6.77 0.00 0.00 73519.56 7813.91 66947.54 00:06:11.628 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:11.628 Verification LBA range: start 0xa0000 length 0xa0000 00:06:11.628 Nvme1n1 : 5.08 1688.32 6.60 0.00 0.00 75077.36 7259.37 63721.16 00:06:11.628 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:11.628 Verification LBA range: start 0x0 length 0x80000 00:06:11.628 Nvme2n1 : 5.06 1732.45 6.77 0.00 0.00 73461.42 9124.63 65737.65 00:06:11.628 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:11.628 Verification LBA range: start 0x80000 length 0x80000 00:06:11.628 Nvme2n1 : 5.08 1687.88 6.59 0.00 0.00 75008.82 7561.85 64124.46 00:06:11.628 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:11.628 Verification LBA range: start 0x0 length 0x80000 00:06:11.628 Nvme2n2 : 5.06 1731.78 6.76 0.00 0.00 73366.24 9981.64 62511.26 00:06:11.628 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:11.628 Verification LBA range: start 0x80000 length 0x80000 00:06:11.628 Nvme2n2 : 5.04 1674.74 6.54 0.00 0.00 76137.94 13510.50 65737.65 00:06:11.628 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:11.628 Verification LBA range: start 0x0 length 0x80000 00:06:11.628 Nvme2n3 : 5.07 1730.91 6.76 0.00 0.00 73273.68 11191.53 62914.56 00:06:11.628 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:11.628 Verification LBA range: start 0x80000 length 0x80000 00:06:11.628 Nvme2n3 : 5.06 1680.70 6.57 0.00 0.00 75700.38 5747.00 65737.65 00:06:11.628 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:11.628 Verification LBA range: start 0x0 length 0x20000 00:06:11.628 Nvme3n1 : 5.08 1740.27 6.80 0.00 0.00 72901.88 5444.53 65737.65 00:06:11.628 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:11.628 Verification LBA range: start 0x20000 length 0x20000 00:06:11.628 Nvme3n1 : 5.07 1680.25 6.56 0.00 0.00 75565.06 5948.65 61301.37 00:06:11.628 [2024-11-28T09:38:50.508Z] =================================================================================================================== 00:06:11.628 [2024-11-28T09:38:50.508Z] Total : 20493.76 80.05 0.00 0.00 74403.00 5444.53 68560.74 00:06:13.011 00:06:13.011 real 0m7.117s 00:06:13.011 user 0m13.337s 00:06:13.011 sys 0m0.203s 00:06:13.011 09:38:51 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.011 ************************************ 00:06:13.011 09:38:51 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:13.011 END TEST bdev_verify 00:06:13.011 ************************************ 00:06:13.011 09:38:51 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:13.011 09:38:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:13.011 09:38:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.011 09:38:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.011 ************************************ 00:06:13.011 START TEST bdev_verify_big_io 00:06:13.011 ************************************ 00:06:13.011 09:38:51 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:13.011 [2024-11-28 09:38:51.717632] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:13.011 [2024-11-28 09:38:51.717747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60420 ] 00:06:13.011 [2024-11-28 09:38:51.876367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.272 [2024-11-28 09:38:51.972618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.272 [2024-11-28 09:38:51.972695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.844 Running I/O for 5 seconds... 
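The throughput figures in these progress lines and latency tables are simply IOPS scaled by the I/O size of the run: MiB/s = IOPS * io_size / 1048576, where io_size is the -o argument of the bdevperf invocation (4096 for the verify run above, 65536 for this big-I/O run). A small helper to sanity-check the numbers, using values taken from this log:

# Sketch only: confirm how the MiB/s column is derived from IOPS.
iops_to_mibs() {
  local iops=$1 io_size=$2
  awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f MiB/s\n", i * s / 1048576 }'
}

iops_to_mibs 1092.00 65536    # 68.25 MiB/s, first progress sample below
iops_to_mibs 23424.00 4096    # 91.50 MiB/s, first sample of the 4 KiB run above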
00:06:19.049 1092.00 IOPS, 68.25 MiB/s [2024-11-28T09:38:58.861Z] 2139.50 IOPS, 133.72 MiB/s [2024-11-28T09:38:58.861Z] 2671.67 IOPS, 166.98 MiB/s 00:06:19.981 Latency(us) 00:06:19.981 [2024-11-28T09:38:58.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:19.981 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:19.981 Verification LBA range: start 0x0 length 0xbd0b 00:06:19.981 Nvme0n1 : 5.68 123.84 7.74 0.00 0.00 996626.33 15426.17 1077613.49 00:06:19.981 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:19.981 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:19.981 Nvme0n1 : 5.74 111.44 6.97 0.00 0.00 1090297.50 12804.73 1226027.32 00:06:19.981 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:19.981 Verification LBA range: start 0x0 length 0xa000 00:06:19.981 Nvme1n1 : 5.69 123.80 7.74 0.00 0.00 966714.15 76626.71 942105.21 00:06:19.981 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:19.981 Verification LBA range: start 0xa000 length 0xa000 00:06:19.981 Nvme1n1 : 5.92 112.83 7.05 0.00 0.00 1047329.88 114536.76 1232480.10 00:06:19.981 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:19.981 Verification LBA range: start 0x0 length 0x8000 00:06:19.981 Nvme2n1 : 5.82 127.74 7.98 0.00 0.00 906466.41 52025.50 967916.31 00:06:19.981 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:19.981 Verification LBA range: start 0x8000 length 0x8000 00:06:19.981 Nvme2n1 : 5.94 111.47 6.97 0.00 0.00 1030610.33 57268.38 1819682.66 00:06:19.981 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:19.981 Verification LBA range: start 0x0 length 0x8000 00:06:19.981 Nvme2n2 : 5.83 131.82 8.24 0.00 0.00 858087.84 83482.78 987274.63 00:06:19.981 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:19.981 Verification LBA range: start 0x8000 length 0x8000 00:06:19.981 Nvme2n2 : 5.94 120.36 7.52 0.00 0.00 921974.41 14518.74 1322818.95 00:06:19.981 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:19.981 Verification LBA range: start 0x0 length 0x8000 00:06:19.981 Nvme2n3 : 5.90 141.10 8.82 0.00 0.00 778960.80 27424.30 1013085.74 00:06:19.981 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:19.982 Verification LBA range: start 0x8000 length 0x8000 00:06:19.982 Nvme2n3 : 5.96 120.18 7.51 0.00 0.00 887113.97 13510.50 1632552.17 00:06:19.982 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:19.982 Verification LBA range: start 0x0 length 0x2000 00:06:19.982 Nvme3n1 : 5.93 155.11 9.69 0.00 0.00 687513.83 1512.37 1058255.16 00:06:19.982 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:19.982 Verification LBA range: start 0x2000 length 0x2000 00:06:19.982 Nvme3n1 : 6.10 192.28 12.02 0.00 0.00 541210.47 351.31 1935832.62 00:06:19.982 [2024-11-28T09:38:58.862Z] =================================================================================================================== 00:06:19.982 [2024-11-28T09:38:58.862Z] Total : 1571.97 98.25 0.00 0.00 865827.70 351.31 1935832.62 00:06:21.434 00:06:21.434 real 0m8.340s 00:06:21.434 user 0m15.775s 00:06:21.434 sys 0m0.250s 00:06:21.434 ************************************ 00:06:21.434 END TEST bdev_verify_big_io 00:06:21.434 ************************************ 00:06:21.434 
09:38:59 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.434 09:38:59 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:21.434 09:39:00 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:21.434 09:39:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:21.434 09:39:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.434 09:39:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.434 ************************************ 00:06:21.434 START TEST bdev_write_zeroes 00:06:21.434 ************************************ 00:06:21.434 09:39:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:21.434 [2024-11-28 09:39:00.126126] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:21.434 [2024-11-28 09:39:00.126262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60529 ] 00:06:21.434 [2024-11-28 09:39:00.284916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.695 [2024-11-28 09:39:00.406144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.266 Running I/O for 1 seconds... 00:06:23.204 59132.00 IOPS, 230.98 MiB/s 00:06:23.204 Latency(us) 00:06:23.204 [2024-11-28T09:39:02.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:23.204 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:23.204 Nvme0n1 : 1.02 9820.49 38.36 0.00 0.00 13005.94 5973.86 24298.73 00:06:23.204 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:23.204 Nvme1n1 : 1.02 9813.25 38.33 0.00 0.00 12996.89 9225.45 21576.47 00:06:23.204 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:23.204 Nvme2n1 : 1.03 9802.20 38.29 0.00 0.00 12973.85 8721.33 20568.22 00:06:23.204 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:23.204 Nvme2n2 : 1.03 9790.99 38.25 0.00 0.00 12966.78 8217.21 20669.05 00:06:23.204 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:23.204 Nvme2n3 : 1.03 9779.97 38.20 0.00 0.00 12962.33 8721.33 20467.40 00:06:23.204 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:23.204 Nvme3n1 : 1.03 9769.03 38.16 0.00 0.00 12942.18 7965.14 21979.77 00:06:23.204 [2024-11-28T09:39:02.084Z] =================================================================================================================== 00:06:23.204 [2024-11-28T09:39:02.084Z] Total : 58775.92 229.59 0.00 0.00 12974.66 5973.86 24298.73 00:06:24.147 00:06:24.147 real 0m2.821s 00:06:24.147 user 0m2.455s 00:06:24.147 sys 0m0.245s 00:06:24.147 09:39:02 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.147 ************************************ 00:06:24.147 END TEST bdev_write_zeroes 00:06:24.147 
************************************ 00:06:24.147 09:39:02 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:24.148 09:39:02 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:24.148 09:39:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:24.148 09:39:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.148 09:39:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.148 ************************************ 00:06:24.148 START TEST bdev_json_nonenclosed 00:06:24.148 ************************************ 00:06:24.148 09:39:02 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:24.148 [2024-11-28 09:39:03.018141] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:24.148 [2024-11-28 09:39:03.018303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60584 ] 00:06:24.407 [2024-11-28 09:39:03.180738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.668 [2024-11-28 09:39:03.299107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.668 [2024-11-28 09:39:03.299227] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:24.668 [2024-11-28 09:39:03.299247] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:24.668 [2024-11-28 09:39:03.299257] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.668 00:06:24.668 real 0m0.537s 00:06:24.668 user 0m0.331s 00:06:24.668 sys 0m0.101s 00:06:24.668 ************************************ 00:06:24.668 END TEST bdev_json_nonenclosed 00:06:24.668 ************************************ 00:06:24.668 09:39:03 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.668 09:39:03 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:24.668 09:39:03 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:24.668 09:39:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:24.668 09:39:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.668 09:39:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.668 ************************************ 00:06:24.668 START TEST bdev_json_nonarray 00:06:24.668 ************************************ 00:06:24.668 09:39:03 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:24.929 [2024-11-28 09:39:03.594275] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:24.929 [2024-11-28 09:39:03.594388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60610 ] 00:06:24.929 [2024-11-28 09:39:03.750895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.191 [2024-11-28 09:39:03.844168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.191 [2024-11-28 09:39:03.844248] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:25.191 [2024-11-28 09:39:03.844265] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:25.191 [2024-11-28 09:39:03.844274] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.191 00:06:25.191 real 0m0.489s 00:06:25.191 user 0m0.296s 00:06:25.191 sys 0m0.089s 00:06:25.191 09:39:04 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.191 ************************************ 00:06:25.191 END TEST bdev_json_nonarray 00:06:25.191 ************************************ 00:06:25.191 09:39:04 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:25.191 09:39:04 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:25.191 09:39:04 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:25.191 09:39:04 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:25.191 09:39:04 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:25.191 09:39:04 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:25.191 09:39:04 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:25.191 09:39:04 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:25.453 09:39:04 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:25.453 09:39:04 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:25.453 09:39:04 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:25.453 09:39:04 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:25.453 00:06:25.453 real 0m35.737s 00:06:25.453 user 0m54.684s 00:06:25.453 sys 0m5.259s 00:06:25.453 09:39:04 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.453 ************************************ 00:06:25.453 END TEST blockdev_nvme 00:06:25.453 ************************************ 00:06:25.453 09:39:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.453 09:39:04 -- spdk/autotest.sh@209 -- # uname -s 00:06:25.453 09:39:04 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:25.453 09:39:04 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:25.453 09:39:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:25.453 09:39:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.453 09:39:04 -- common/autotest_common.sh@10 -- # set +x 00:06:25.453 ************************************ 00:06:25.453 START TEST blockdev_nvme_gpt 00:06:25.453 ************************************ 00:06:25.453 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:25.453 * Looking for test storage... 
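The two json_config errors above are the point of these negative tests: nonenclosed.json is not wrapped in an outer {} and nonarray.json carries a "subsystems" value that is not an array, so bdevperf is expected to abort with exactly the messages logged. For reference, a minimal sketch of the three shapes — the /tmp path is hypothetical, and the attach parameters mirror the bdev config generated later in this run:

    # valid: a top-level object whose "subsystems" member is an array
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
          ] }
      ]
    }
    EOF
    # nonenclosed.json: the same content minus the outer { }   -> "not enclosed in {}"
    # nonarray.json:    "subsystems" given as an object, not [] -> "'subsystems' should be an array"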
00:06:25.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:25.453 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:25.453 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:25.453 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:25.453 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:25.453 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.454 09:39:04 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:25.454 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.454 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:25.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.454 --rc genhtml_branch_coverage=1 00:06:25.454 --rc genhtml_function_coverage=1 00:06:25.454 --rc genhtml_legend=1 00:06:25.454 --rc geninfo_all_blocks=1 00:06:25.454 --rc geninfo_unexecuted_blocks=1 00:06:25.454 00:06:25.454 ' 00:06:25.454 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:25.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.454 --rc 
genhtml_branch_coverage=1 00:06:25.454 --rc genhtml_function_coverage=1 00:06:25.454 --rc genhtml_legend=1 00:06:25.454 --rc geninfo_all_blocks=1 00:06:25.454 --rc geninfo_unexecuted_blocks=1 00:06:25.454 00:06:25.454 ' 00:06:25.454 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:25.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.454 --rc genhtml_branch_coverage=1 00:06:25.454 --rc genhtml_function_coverage=1 00:06:25.454 --rc genhtml_legend=1 00:06:25.454 --rc geninfo_all_blocks=1 00:06:25.454 --rc geninfo_unexecuted_blocks=1 00:06:25.454 00:06:25.454 ' 00:06:25.454 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:25.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.454 --rc genhtml_branch_coverage=1 00:06:25.454 --rc genhtml_function_coverage=1 00:06:25.454 --rc genhtml_legend=1 00:06:25.454 --rc geninfo_all_blocks=1 00:06:25.454 --rc geninfo_unexecuted_blocks=1 00:06:25.454 00:06:25.454 ' 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60688 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60688 
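waitforlisten above blocks until the spdk_tgt just launched (pid 60688) is accepting RPCs on /var/tmp/spdk.sock before the gpt setup continues. A minimal sketch of that kind of readiness loop, using the spdk_get_version RPC as the probe; the helper name here is made up, and the real logic lives in autotest_common.sh:

    wait_for_rpc_sock() {                     # illustrative stand-in for waitforlisten
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || return 1                     # target died before listening
        if ./scripts/rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1; then
          return 0                                                 # socket is up and answering
        fi
        sleep 0.5
      done
      return 1                                                     # gave up waiting
    }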
00:06:25.454 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60688 ']' 00:06:25.454 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.454 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.454 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.454 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.454 09:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:25.454 09:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:25.716 [2024-11-28 09:39:04.355046] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:25.716 [2024-11-28 09:39:04.355177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60688 ] 00:06:25.716 [2024-11-28 09:39:04.513020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.977 [2024-11-28 09:39:04.608996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.548 09:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.548 09:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:26.548 09:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:26.548 09:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:26.548 09:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:26.808 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:26.808 Waiting for block devices as requested 00:06:27.071 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:27.071 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:27.071 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:27.071 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:32.348 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:32.349 09:39:11 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:32.349 09:39:11 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:32.349 BYT; 00:06:32.349 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:32.349 BYT; 00:06:32.349 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:32.349 09:39:11 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:32.349 09:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:33.281 The operation has completed successfully. 00:06:33.281 09:39:12 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:34.657 The operation has completed successfully. 00:06:34.657 09:39:13 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:34.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:35.225 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:35.225 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:35.225 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:35.225 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:35.225 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:35.225 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.225 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.485 [] 00:06:35.485 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.485 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:35.485 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:35.485 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:35.485 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:35.486 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:35.486 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.486 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.748 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.748 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:35.748 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.748 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.748 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.748 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:35.748 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:35.748 09:39:14 
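That parted/sgdisk sequence is what makes the gpt bdev module claim the partitions: parted writes a fresh GPT label with two partitions, then sgdisk retypes partition 1 to the current SPDK partition type GUID and partition 2 to the legacy one (both scraped from module/bdev/gpt/gpt.h above) while pinning the unique partition GUIDs the later checks assert on. Condensed, and assuming /dev/nvme0n1 is a scratch disk whose contents may be destroyed:

    dev=/dev/nvme0n1
    parted -s "$dev" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"        # current SPDK type GUID
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"        # legacy SPDK type GUID

These partitions are what later appear as the Nvme1n1p1/Nvme1n1p2 GPT bdevs in the bdev_get_bdevs dump below.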
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.748 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.748 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.748 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:35.748 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.748 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.748 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.748 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:35.748 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.748 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.748 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.748 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:35.748 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:35.748 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.748 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:35.748 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.748 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.748 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:35.748 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:35.749 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ccf8c0b8-6a07-4326-839c-b2318636a8f6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ccf8c0b8-6a07-4326-839c-b2318636a8f6",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "808b796c-3867-4e7b-be86-485a55358a53"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "808b796c-3867-4e7b-be86-485a55358a53",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "43864b8b-7f9a-4ac1-9666-9e7343402a64"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "43864b8b-7f9a-4ac1-9666-9e7343402a64",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "eb7ab514-bd0b-4718-a655-fb68174485ca"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eb7ab514-bd0b-4718-a655-fb68174485ca",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0aaeb8d0-4433-45b2-addb-4d2f6c55746c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0aaeb8d0-4433-45b2-addb-4d2f6c55746c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:35.749 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:35.749 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:35.749 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:35.749 09:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60688 00:06:35.749 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60688 ']' 00:06:35.749 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60688 00:06:35.749 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:35.749 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.749 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60688 00:06:35.749 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.749 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.749 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60688' 00:06:35.749 killing process with pid 60688 00:06:35.749 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60688 00:06:35.749 09:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60688 00:06:37.739 09:39:16 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:37.739 09:39:16 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:37.739 09:39:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:37.739 09:39:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.740 09:39:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:37.740 ************************************ 00:06:37.740 START TEST bdev_hello_world 00:06:37.740 ************************************ 00:06:37.740 09:39:16 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:37.740 
[2024-11-28 09:39:16.296634] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:37.740 [2024-11-28 09:39:16.296751] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61316 ] 00:06:37.740 [2024-11-28 09:39:16.458206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.740 [2024-11-28 09:39:16.553923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.314 [2024-11-28 09:39:17.139804] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:38.314 [2024-11-28 09:39:17.139866] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:38.314 [2024-11-28 09:39:17.139894] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:38.314 [2024-11-28 09:39:17.142704] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:38.314 [2024-11-28 09:39:17.143510] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:38.314 [2024-11-28 09:39:17.143544] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:38.314 [2024-11-28 09:39:17.144185] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:38.314 00:06:38.314 [2024-11-28 09:39:17.144219] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:39.259 00:06:39.259 real 0m1.704s 00:06:39.259 user 0m1.378s 00:06:39.259 sys 0m0.216s 00:06:39.259 ************************************ 00:06:39.259 END TEST bdev_hello_world 00:06:39.259 ************************************ 00:06:39.259 09:39:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.259 09:39:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:39.259 09:39:17 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:39.259 09:39:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:39.259 09:39:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.259 09:39:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:39.259 ************************************ 00:06:39.259 START TEST bdev_bounds 00:06:39.259 ************************************ 00:06:39.259 09:39:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:39.259 09:39:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61353 00:06:39.259 09:39:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.259 Process bdevio pid: 61353 00:06:39.259 09:39:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61353' 00:06:39.259 09:39:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61353 00:06:39.259 09:39:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61353 ']' 00:06:39.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
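hello_bdev above simply opened Nvme0n1 through the same bdev.json, wrote its test string, and read back "Hello World!". The bounds test that has just started uses the harness's other pattern: bdevio is launched with -w so it loads the config and then parks on /var/tmp/spdk.sock, and tests.py perform_tests (run just below) drives the suites over that socket. Replayed by hand, with the single-controller /tmp/bdev.json from the earlier sketch standing in for the full config generated by this run:

    ./test/bdev/bdevio/bdevio -w -s 0 --json /tmp/bdev.json &      # -w: wait for an RPC before running tests
    bdevio_pid=$!
    # poll /var/tmp/spdk.sock until it answers (see the readiness sketch above)
    ./test/bdev/bdevio/tests.py perform_tests                      # kicks off the per-bdev suites
    kill "$bdevio_pid"                                             # the harness tears bdevio down the same way
    wait "$bdevio_pid" 2>/dev/null || true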
00:06:39.259 09:39:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.259 09:39:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.259 09:39:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:39.259 09:39:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.259 09:39:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.259 09:39:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:39.259 [2024-11-28 09:39:18.069229] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:39.259 [2024-11-28 09:39:18.069347] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61353 ] 00:06:39.520 [2024-11-28 09:39:18.222670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.520 [2024-11-28 09:39:18.321201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.520 [2024-11-28 09:39:18.321374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.520 [2024-11-28 09:39:18.321460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.093 09:39:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.093 09:39:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:40.093 09:39:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:40.355 I/O targets: 00:06:40.355 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:40.355 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:40.355 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:40.355 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:40.355 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:40.355 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:40.355 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:40.355 00:06:40.355 00:06:40.355 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.355 http://cunit.sourceforge.net/ 00:06:40.355 00:06:40.355 00:06:40.355 Suite: bdevio tests on: Nvme3n1 00:06:40.355 Test: blockdev write read block ...passed 00:06:40.355 Test: blockdev write zeroes read block ...passed 00:06:40.355 Test: blockdev write zeroes read no split ...passed 00:06:40.355 Test: blockdev write zeroes read split ...passed 00:06:40.355 Test: blockdev write zeroes read split partial ...passed 00:06:40.355 Test: blockdev reset ...[2024-11-28 09:39:19.035191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:40.355 [2024-11-28 09:39:19.039268] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:40.355 passed 00:06:40.355 Test: blockdev write read 8 blocks ...passed 00:06:40.355 Test: blockdev write read size > 128k ...passed 00:06:40.355 Test: blockdev write read invalid size ...passed 00:06:40.355 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.355 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.355 Test: blockdev write read max offset ...passed 00:06:40.355 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.355 Test: blockdev writev readv 8 blocks ...passed 00:06:40.355 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.355 Test: blockdev writev readv block ...passed 00:06:40.355 Test: blockdev writev readv size > 128k ...passed 00:06:40.355 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.355 Test: blockdev comparev and writev ...[2024-11-28 09:39:19.057026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba604000 len:0x1000 00:06:40.355 [2024-11-28 09:39:19.057149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.355 passed 00:06:40.355 Test: blockdev nvme passthru rw ...passed 00:06:40.355 Test: blockdev nvme passthru vendor specific ...[2024-11-28 09:39:19.059531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:40.355 passed 00:06:40.355 Test: blockdev nvme admin passthru ...[2024-11-28 09:39:19.059615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:40.355 passed 00:06:40.355 Test: blockdev copy ...passed 00:06:40.355 Suite: bdevio tests on: Nvme2n3 00:06:40.355 Test: blockdev write read block ...passed 00:06:40.355 Test: blockdev write zeroes read block ...passed 00:06:40.355 Test: blockdev write zeroes read no split ...passed 00:06:40.355 Test: blockdev write zeroes read split ...passed 00:06:40.355 Test: blockdev write zeroes read split partial ...passed 00:06:40.355 Test: blockdev reset ...[2024-11-28 09:39:19.118315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:40.355 [2024-11-28 09:39:19.121462] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:06:40.355 Test: blockdev write read 8 blocks ...uccessful. 
00:06:40.355 passed 00:06:40.355 Test: blockdev write read size > 128k ...passed 00:06:40.355 Test: blockdev write read invalid size ...passed 00:06:40.355 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.355 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.355 Test: blockdev write read max offset ...passed 00:06:40.355 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.355 Test: blockdev writev readv 8 blocks ...passed 00:06:40.355 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.355 Test: blockdev writev readv block ...passed 00:06:40.355 Test: blockdev writev readv size > 128k ...passed 00:06:40.355 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.355 Test: blockdev comparev and writev ...[2024-11-28 09:39:19.134862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba602000 len:0x1000 00:06:40.355 passed 00:06:40.355 Test: blockdev nvme passthru rw ...[2024-11-28 09:39:19.135437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.355 passed 00:06:40.355 Test: blockdev nvme passthru vendor specific ...[2024-11-28 09:39:19.136861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:40.355 passed 00:06:40.355 Test: blockdev nvme admin passthru ...passed 00:06:40.355 Test: blockdev copy ...[2024-11-28 09:39:19.137304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:40.355 passed 00:06:40.355 Suite: bdevio tests on: Nvme2n2 00:06:40.355 Test: blockdev write read block ...passed 00:06:40.355 Test: blockdev write zeroes read block ...passed 00:06:40.355 Test: blockdev write zeroes read no split ...passed 00:06:40.355 Test: blockdev write zeroes read split ...passed 00:06:40.355 Test: blockdev write zeroes read split partial ...passed 00:06:40.355 Test: blockdev reset ...[2024-11-28 09:39:19.187363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:40.355 [2024-11-28 09:39:19.190367] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:06:40.356 Test: blockdev write read 8 blocks ...uccessful. 
00:06:40.356 passed 00:06:40.356 Test: blockdev write read size > 128k ...passed 00:06:40.356 Test: blockdev write read invalid size ...passed 00:06:40.356 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.356 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.356 Test: blockdev write read max offset ...passed 00:06:40.356 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.356 Test: blockdev writev readv 8 blocks ...passed 00:06:40.356 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.356 Test: blockdev writev readv block ...passed 00:06:40.356 Test: blockdev writev readv size > 128k ...passed 00:06:40.356 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.356 Test: blockdev comparev and writev ...[2024-11-28 09:39:19.208448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e1238000 len:0x1000 00:06:40.356 [2024-11-28 09:39:19.208655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.356 passed 00:06:40.356 Test: blockdev nvme passthru rw ...passed 00:06:40.356 Test: blockdev nvme passthru vendor specific ...[2024-11-28 09:39:19.211297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:40.356 passed 00:06:40.356 Test: blockdev nvme admin passthru ...[2024-11-28 09:39:19.211488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:40.356 passed 00:06:40.356 Test: blockdev copy ...passed 00:06:40.356 Suite: bdevio tests on: Nvme2n1 00:06:40.356 Test: blockdev write read block ...passed 00:06:40.356 Test: blockdev write zeroes read block ...passed 00:06:40.356 Test: blockdev write zeroes read no split ...passed 00:06:40.617 Test: blockdev write zeroes read split ...passed 00:06:40.617 Test: blockdev write zeroes read split partial ...passed 00:06:40.617 Test: blockdev reset ...[2024-11-28 09:39:19.276889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:40.617 [2024-11-28 09:39:19.280293] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:06:40.617 Test: blockdev write read 8 blocks ...uccessful. 
00:06:40.617 passed 00:06:40.617 Test: blockdev write read size > 128k ...passed 00:06:40.617 Test: blockdev write read invalid size ...passed 00:06:40.617 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.617 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.617 Test: blockdev write read max offset ...passed 00:06:40.617 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.617 Test: blockdev writev readv 8 blocks ...passed 00:06:40.617 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.617 Test: blockdev writev readv block ...passed 00:06:40.617 Test: blockdev writev readv size > 128k ...passed 00:06:40.617 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.617 Test: blockdev comparev and writev ...[2024-11-28 09:39:19.298413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e1234000 len:0x1000 00:06:40.617 passed 00:06:40.617 Test: blockdev nvme passthru rw ...[2024-11-28 09:39:19.298644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.617 passed 00:06:40.617 Test: blockdev nvme passthru vendor specific ...[2024-11-28 09:39:19.300761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:06:40.617 Test: blockdev nvme admin passthru ...RP2 0x0 00:06:40.618 [2024-11-28 09:39:19.300903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:40.618 passed 00:06:40.618 Test: blockdev copy ...passed 00:06:40.618 Suite: bdevio tests on: Nvme1n1p2 00:06:40.618 Test: blockdev write read block ...passed 00:06:40.618 Test: blockdev write zeroes read block ...passed 00:06:40.618 Test: blockdev write zeroes read no split ...passed 00:06:40.618 Test: blockdev write zeroes read split ...passed 00:06:40.618 Test: blockdev write zeroes read split partial ...passed 00:06:40.618 Test: blockdev reset ...[2024-11-28 09:39:19.358134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:40.618 [2024-11-28 09:39:19.361487] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:40.618 passed 00:06:40.618 Test: blockdev write read 8 blocks ...passed 00:06:40.618 Test: blockdev write read size > 128k ...passed 00:06:40.618 Test: blockdev write read invalid size ...passed 00:06:40.618 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.618 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.618 Test: blockdev write read max offset ...passed 00:06:40.618 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.618 Test: blockdev writev readv 8 blocks ...passed 00:06:40.618 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.618 Test: blockdev writev readv block ...passed 00:06:40.618 Test: blockdev writev readv size > 128k ...passed 00:06:40.618 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.618 Test: blockdev comparev and writev ...[2024-11-28 09:39:19.378241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e1230000 len:0x1000 00:06:40.618 passed 00:06:40.618 Test: blockdev nvme passthru rw ...passed 00:06:40.618 Test: blockdev nvme passthru vendor specific ...passed 00:06:40.618 Test: blockdev nvme admin passthru ...passed 00:06:40.618 Test: blockdev copy ...[2024-11-28 09:39:19.378406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.618 passed 00:06:40.618 Suite: bdevio tests on: Nvme1n1p1 00:06:40.618 Test: blockdev write read block ...passed 00:06:40.618 Test: blockdev write zeroes read block ...passed 00:06:40.618 Test: blockdev write zeroes read no split ...passed 00:06:40.618 Test: blockdev write zeroes read split ...passed 00:06:40.618 Test: blockdev write zeroes read split partial ...passed 00:06:40.618 Test: blockdev reset ...[2024-11-28 09:39:19.428177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:40.618 [2024-11-28 09:39:19.431585] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spassed 00:06:40.618 Test: blockdev write read 8 blocks ...uccessful. 
00:06:40.618 passed 00:06:40.618 Test: blockdev write read size > 128k ...passed 00:06:40.618 Test: blockdev write read invalid size ...passed 00:06:40.618 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.618 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.618 Test: blockdev write read max offset ...passed 00:06:40.618 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.618 Test: blockdev writev readv 8 blocks ...passed 00:06:40.618 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.618 Test: blockdev writev readv block ...passed 00:06:40.618 Test: blockdev writev readv size > 128k ...passed 00:06:40.618 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.618 Test: blockdev comparev and writev ...[2024-11-28 09:39:19.450442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ba80e000 len:0x1000 00:06:40.618 [2024-11-28 09:39:19.450478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.618 passed 00:06:40.618 Test: blockdev nvme passthru rw ...passed 00:06:40.618 Test: blockdev nvme passthru vendor specific ...passed 00:06:40.618 Test: blockdev nvme admin passthru ...passed 00:06:40.618 Test: blockdev copy ...passed 00:06:40.618 Suite: bdevio tests on: Nvme0n1 00:06:40.618 Test: blockdev write read block ...passed 00:06:40.618 Test: blockdev write zeroes read block ...passed 00:06:40.618 Test: blockdev write zeroes read no split ...passed 00:06:40.618 Test: blockdev write zeroes read split ...passed 00:06:40.880 Test: blockdev write zeroes read split partial ...passed 00:06:40.880 Test: blockdev reset ...[2024-11-28 09:39:19.501663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:40.880 [2024-11-28 09:39:19.505172] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller spassed 00:06:40.880 Test: blockdev write read 8 blocks ...uccessful. 00:06:40.880 passed 00:06:40.880 Test: blockdev write read size > 128k ...passed 00:06:40.880 Test: blockdev write read invalid size ...passed 00:06:40.880 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.880 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.880 Test: blockdev write read max offset ...passed 00:06:40.880 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.880 Test: blockdev writev readv 8 blocks ...passed 00:06:40.880 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.880 Test: blockdev writev readv block ...passed 00:06:40.880 Test: blockdev writev readv size > 128k ...passed 00:06:40.880 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.880 Test: blockdev comparev and writev ...[2024-11-28 09:39:19.521277] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:40.880 separate metadata which is not supported yet. 
00:06:40.880 passed 00:06:40.880 Test: blockdev nvme passthru rw ...passed 00:06:40.880 Test: blockdev nvme passthru vendor specific ...[2024-11-28 09:39:19.522650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:40.880 [2024-11-28 09:39:19.522682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:40.880 passed 00:06:40.880 Test: blockdev nvme admin passthru ...passed 00:06:40.880 Test: blockdev copy ...passed 00:06:40.880 00:06:40.880 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.880 suites 7 7 n/a 0 0 00:06:40.880 tests 161 161 161 0 0 00:06:40.880 asserts 1025 1025 1025 0 n/a 00:06:40.880 00:06:40.880 Elapsed time = 1.380 seconds 00:06:40.880 0 00:06:40.880 09:39:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61353 00:06:40.880 09:39:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61353 ']' 00:06:40.880 09:39:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61353 00:06:40.880 09:39:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:40.880 09:39:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.880 09:39:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61353 00:06:40.880 09:39:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.880 09:39:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.880 09:39:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61353' 00:06:40.880 killing process with pid 61353 00:06:40.880 09:39:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61353 00:06:40.880 09:39:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61353 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:41.449 00:06:41.449 real 0m2.172s 00:06:41.449 user 0m5.484s 00:06:41.449 sys 0m0.297s 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:41.449 ************************************ 00:06:41.449 END TEST bdev_bounds 00:06:41.449 ************************************ 00:06:41.449 09:39:20 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:41.449 09:39:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:41.449 09:39:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.449 09:39:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:41.449 ************************************ 00:06:41.449 START TEST bdev_nbd 00:06:41.449 ************************************ 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:41.449 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61407 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61407 /var/tmp/spdk-nbd.sock 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61407 ']' 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.449 09:39:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:41.449 [2024-11-28 09:39:20.286624] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:41.449 [2024-11-28 09:39:20.286709] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.709 [2024-11-28 09:39:20.441899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.709 [2024-11-28 09:39:20.538797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.282 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.282 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:42.282 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:42.282 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.282 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:42.282 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:42.282 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:42.282 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.282 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:42.282 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:42.282 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:42.282 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:42.282 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:42.282 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:42.282 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.543 1+0 records in 00:06:42.543 1+0 records out 00:06:42.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641177 s, 6.4 MB/s 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:42.543 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.804 1+0 records in 00:06:42.804 1+0 records out 00:06:42.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589473 s, 6.9 MB/s 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:42.804 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.065 1+0 records in 00:06:43.065 1+0 records out 00:06:43.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834977 s, 4.9 MB/s 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:43.065 09:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.327 1+0 records in 00:06:43.327 1+0 records out 00:06:43.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000891039 s, 4.6 MB/s 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:43.327 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.588 1+0 records in 00:06:43.588 1+0 records out 00:06:43.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107683 s, 3.8 MB/s 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:43.588 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:06:43.849 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:43.849 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:43.849 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:43.849 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:43.849 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.849 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.849 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.849 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:43.849 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.849 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.849 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.850 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.850 1+0 records in 00:06:43.850 1+0 records out 00:06:43.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116134 s, 3.5 MB/s 00:06:43.850 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.850 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.850 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.850 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.850 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.850 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.850 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:43.850 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:44.110 1+0 records in 00:06:44.110 1+0 records out 00:06:44.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000866896 s, 4.7 MB/s 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:44.110 { 00:06:44.110 "nbd_device": "/dev/nbd0", 00:06:44.110 "bdev_name": "Nvme0n1" 00:06:44.110 }, 00:06:44.110 { 00:06:44.110 "nbd_device": "/dev/nbd1", 00:06:44.110 "bdev_name": "Nvme1n1p1" 00:06:44.110 }, 00:06:44.110 { 00:06:44.110 "nbd_device": "/dev/nbd2", 00:06:44.110 "bdev_name": "Nvme1n1p2" 00:06:44.110 }, 00:06:44.110 { 00:06:44.110 "nbd_device": "/dev/nbd3", 00:06:44.110 "bdev_name": "Nvme2n1" 00:06:44.110 }, 00:06:44.110 { 00:06:44.110 "nbd_device": "/dev/nbd4", 00:06:44.110 "bdev_name": "Nvme2n2" 00:06:44.110 }, 00:06:44.110 { 00:06:44.110 "nbd_device": "/dev/nbd5", 00:06:44.110 "bdev_name": "Nvme2n3" 00:06:44.110 }, 00:06:44.110 { 00:06:44.110 "nbd_device": "/dev/nbd6", 00:06:44.110 "bdev_name": "Nvme3n1" 00:06:44.110 } 00:06:44.110 ]' 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:44.110 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:44.110 { 00:06:44.110 "nbd_device": "/dev/nbd0", 00:06:44.110 "bdev_name": "Nvme0n1" 00:06:44.110 }, 00:06:44.110 { 00:06:44.110 "nbd_device": "/dev/nbd1", 00:06:44.110 "bdev_name": "Nvme1n1p1" 00:06:44.110 }, 00:06:44.110 { 00:06:44.110 "nbd_device": "/dev/nbd2", 00:06:44.110 "bdev_name": "Nvme1n1p2" 00:06:44.110 }, 00:06:44.110 { 00:06:44.110 "nbd_device": "/dev/nbd3", 00:06:44.110 "bdev_name": "Nvme2n1" 00:06:44.110 }, 00:06:44.110 { 00:06:44.110 "nbd_device": "/dev/nbd4", 00:06:44.110 "bdev_name": "Nvme2n2" 00:06:44.110 }, 00:06:44.110 { 00:06:44.110 "nbd_device": "/dev/nbd5", 00:06:44.110 "bdev_name": "Nvme2n3" 00:06:44.110 }, 00:06:44.110 { 00:06:44.110 "nbd_device": "/dev/nbd6", 00:06:44.110 "bdev_name": "Nvme3n1" 00:06:44.110 } 00:06:44.110 ]' 00:06:44.111 09:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.371 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:44.632 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:44.632 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:44.632 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:44.632 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.632 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.632 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:44.632 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.632 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.632 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.632 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:44.892 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:44.892 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:44.892 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:44.892 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.892 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.892 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:44.892 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.892 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.892 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.892 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:45.154 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:45.154 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:45.154 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:45.154 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.154 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.154 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:45.154 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.154 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.154 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.154 09:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.415 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:45.675 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:45.675 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:45.675 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:06:45.675 09:39:24 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.675 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.675 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:45.675 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.675 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.675 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.675 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.675 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.935 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:45.936 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.936 09:39:24 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:45.936 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.936 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:45.936 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.936 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:45.936 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:46.197 /dev/nbd0 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.197 1+0 records in 00:06:46.197 1+0 records out 00:06:46.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101517 s, 4.0 MB/s 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:46.197 09:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:46.458 /dev/nbd1 00:06:46.458 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.458 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.458 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:46.458 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.458 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.458 09:39:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.458 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:46.458 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.458 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.458 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.458 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.458 1+0 records in 00:06:46.458 1+0 records out 00:06:46.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000937768 s, 4.4 MB/s 00:06:46.458 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.458 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.458 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.459 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.459 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.459 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.459 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:46.459 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:46.719 /dev/nbd10 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.719 1+0 records in 00:06:46.719 1+0 records out 00:06:46.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103293 s, 4.0 MB/s 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:46.719 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:46.981 /dev/nbd11 00:06:46.981 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:46.981 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:46.981 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:46.981 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.981 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.981 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.981 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:46.981 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.981 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.981 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.981 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.981 1+0 records in 00:06:46.981 1+0 records out 00:06:46.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011302 s, 3.6 MB/s 00:06:46.981 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.981 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.982 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.982 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.982 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.982 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.982 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:46.982 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:46.982 /dev/nbd12 00:06:46.982 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:47.245 09:39:25 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.245 1+0 records in 00:06:47.245 1+0 records out 00:06:47.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00163781 s, 2.5 MB/s 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:47.245 09:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:47.245 /dev/nbd13 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.246 1+0 records in 00:06:47.246 1+0 records out 00:06:47.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128319 s, 3.2 MB/s 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:47.246 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:47.508 /dev/nbd14 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.508 1+0 records in 00:06:47.508 1+0 records out 00:06:47.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117305 s, 3.5 MB/s 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.508 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.771 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.771 { 00:06:47.771 "nbd_device": "/dev/nbd0", 00:06:47.771 "bdev_name": "Nvme0n1" 00:06:47.771 }, 00:06:47.771 { 00:06:47.771 "nbd_device": "/dev/nbd1", 00:06:47.771 "bdev_name": "Nvme1n1p1" 00:06:47.771 }, 00:06:47.771 { 00:06:47.771 "nbd_device": "/dev/nbd10", 00:06:47.771 "bdev_name": "Nvme1n1p2" 00:06:47.771 }, 00:06:47.771 { 00:06:47.771 "nbd_device": "/dev/nbd11", 00:06:47.771 "bdev_name": "Nvme2n1" 00:06:47.771 }, 00:06:47.771 { 00:06:47.771 "nbd_device": "/dev/nbd12", 00:06:47.771 "bdev_name": "Nvme2n2" 00:06:47.771 }, 00:06:47.771 { 00:06:47.771 "nbd_device": "/dev/nbd13", 00:06:47.771 "bdev_name": "Nvme2n3" 00:06:47.771 }, 00:06:47.771 { 
00:06:47.771 "nbd_device": "/dev/nbd14", 00:06:47.771 "bdev_name": "Nvme3n1" 00:06:47.771 } 00:06:47.771 ]' 00:06:47.771 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.771 { 00:06:47.771 "nbd_device": "/dev/nbd0", 00:06:47.771 "bdev_name": "Nvme0n1" 00:06:47.771 }, 00:06:47.771 { 00:06:47.771 "nbd_device": "/dev/nbd1", 00:06:47.771 "bdev_name": "Nvme1n1p1" 00:06:47.771 }, 00:06:47.771 { 00:06:47.771 "nbd_device": "/dev/nbd10", 00:06:47.771 "bdev_name": "Nvme1n1p2" 00:06:47.771 }, 00:06:47.771 { 00:06:47.771 "nbd_device": "/dev/nbd11", 00:06:47.771 "bdev_name": "Nvme2n1" 00:06:47.771 }, 00:06:47.771 { 00:06:47.771 "nbd_device": "/dev/nbd12", 00:06:47.771 "bdev_name": "Nvme2n2" 00:06:47.771 }, 00:06:47.771 { 00:06:47.771 "nbd_device": "/dev/nbd13", 00:06:47.771 "bdev_name": "Nvme2n3" 00:06:47.771 }, 00:06:47.771 { 00:06:47.771 "nbd_device": "/dev/nbd14", 00:06:47.771 "bdev_name": "Nvme3n1" 00:06:47.772 } 00:06:47.772 ]' 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:47.772 /dev/nbd1 00:06:47.772 /dev/nbd10 00:06:47.772 /dev/nbd11 00:06:47.772 /dev/nbd12 00:06:47.772 /dev/nbd13 00:06:47.772 /dev/nbd14' 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:47.772 /dev/nbd1 00:06:47.772 /dev/nbd10 00:06:47.772 /dev/nbd11 00:06:47.772 /dev/nbd12 00:06:47.772 /dev/nbd13 00:06:47.772 /dev/nbd14' 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:47.772 256+0 records in 00:06:47.772 256+0 records out 00:06:47.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00638682 s, 164 MB/s 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.772 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:48.033 256+0 records in 00:06:48.033 256+0 records out 00:06:48.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.24041 s, 4.4 MB/s 00:06:48.033 
09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.033 09:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:48.295 256+0 records in 00:06:48.295 256+0 records out 00:06:48.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.252553 s, 4.2 MB/s 00:06:48.295 09:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.295 09:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:48.557 256+0 records in 00:06:48.557 256+0 records out 00:06:48.557 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.207576 s, 5.1 MB/s 00:06:48.557 09:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.557 09:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:48.818 256+0 records in 00:06:48.818 256+0 records out 00:06:48.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.257149 s, 4.1 MB/s 00:06:48.818 09:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.818 09:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:49.078 256+0 records in 00:06:49.078 256+0 records out 00:06:49.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.202646 s, 5.2 MB/s 00:06:49.078 09:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.078 09:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:49.337 256+0 records in 00:06:49.337 256+0 records out 00:06:49.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151273 s, 6.9 MB/s 00:06:49.337 09:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.337 09:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:49.337 256+0 records in 00:06:49.337 256+0 records out 00:06:49.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.066545 s, 15.8 MB/s 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.337 09:39:28 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.337 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.596 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.596 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.596 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.596 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.597 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.597 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.597 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.597 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.597 
09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.597 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.855 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:50.128 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:50.128 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:50.128 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:50.128 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.128 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.128 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:50.128 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.128 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.128 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.128 09:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:50.387 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:50.387 09:39:29 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:50.387 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:50.387 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.387 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.387 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:50.387 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.387 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.387 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.387 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.646 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.904 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.904 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.904 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.904 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.904 
09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.904 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.904 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:50.904 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.904 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.904 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:50.904 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:50.904 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:50.904 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:50.904 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.904 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:50.904 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:51.162 malloc_lvol_verify 00:06:51.162 09:39:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:51.421 6a322edd-8057-41cf-b379-4adb3016b37e 00:06:51.421 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:51.680 ca323b0a-d7af-40e2-b6ca-f39100660abe 00:06:51.680 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:51.680 /dev/nbd0 00:06:51.680 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:51.680 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:51.680 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:51.680 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:51.680 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:51.680 mke2fs 1.47.0 (5-Feb-2023) 00:06:51.680 Discarding device blocks: 0/4096 done 00:06:51.680 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:51.680 00:06:51.680 Allocating group tables: 0/1 done 00:06:51.680 Writing inode tables: 0/1 done 00:06:51.680 Creating journal (1024 blocks): done 00:06:51.680 Writing superblocks and filesystem accounting information: 0/1 done 00:06:51.680 00:06:51.680 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:51.680 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.680 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:51.680 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.680 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:51.680 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.680 09:39:30 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61407 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61407 ']' 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61407 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61407 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.939 killing process with pid 61407 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61407' 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61407 00:06:51.939 09:39:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61407 00:06:52.507 09:39:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:52.507 00:06:52.507 real 0m11.154s 00:06:52.507 user 0m15.450s 00:06:52.507 sys 0m3.641s 00:06:52.507 09:39:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.507 09:39:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:52.507 ************************************ 00:06:52.507 END TEST bdev_nbd 00:06:52.507 ************************************ 00:06:52.766 09:39:31 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:52.766 09:39:31 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:06:52.766 09:39:31 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:06:52.766 skipping fio tests on NVMe due to multi-ns failures. 00:06:52.766 09:39:31 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
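The bdev_nbd stage that just ended exercised every exported /dev/nbd* with the same write-and-compare round trip: fill a 1 MiB scratch file from /dev/urandom, copy it onto each nbd device with direct I/O, then compare each device against the scratch file. A condensed sketch of that flow, assuming a shorter device list and a scratch path picked for the sketch rather than the seven-device run above:

  tmp=/tmp/nbdrandtest
  # One random 1 MiB buffer shared by all devices.
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write phase
  done
  for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10; do
      cmp -b -n 1M "$tmp" "$nbd"                              # verify phase
  done
  rm "$tmp"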
00:06:52.766 09:39:31 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:52.766 09:39:31 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:52.766 09:39:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:52.766 09:39:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.766 09:39:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.767 ************************************ 00:06:52.767 START TEST bdev_verify 00:06:52.767 ************************************ 00:06:52.767 09:39:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:52.767 [2024-11-28 09:39:31.508816] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:52.767 [2024-11-28 09:39:31.508950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61825 ] 00:06:53.026 [2024-11-28 09:39:31.667434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.026 [2024-11-28 09:39:31.752444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.026 [2024-11-28 09:39:31.752525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.599 Running I/O for 5 seconds... 
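The bdevperf command above maps directly onto the workload parameters echoed in the result table that follows. An equivalent standalone invocation with flag glosses (paths are relative to an SPDK build tree, the trailing empty argument from the trace is dropped, and the glosses reflect common bdevperf usage rather than this run's --help text):

  build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3
  # --json     load the bdev configuration from a JSON file
  # -q 128     queue depth: outstanding I/Os per job
  # -o 4096    I/O size in bytes
  # -w verify  write, read back, and compare
  # -t 5       run time in seconds
  # -C         submit I/O from every core to every bdev
  # -m 0x3     core mask: reactors on cores 0 and 1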
00:06:55.931 19904.00 IOPS, 77.75 MiB/s [2024-11-28T09:39:35.750Z] 19872.00 IOPS, 77.62 MiB/s [2024-11-28T09:39:36.694Z] 19669.33 IOPS, 76.83 MiB/s [2024-11-28T09:39:37.638Z] 19776.00 IOPS, 77.25 MiB/s [2024-11-28T09:39:37.638Z] 19251.20 IOPS, 75.20 MiB/s 00:06:58.758 Latency(us) 00:06:58.758 [2024-11-28T09:39:37.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.758 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:58.758 Verification LBA range: start 0x0 length 0xbd0bd 00:06:58.758 Nvme0n1 : 5.08 1361.40 5.32 0.00 0.00 93826.49 16232.76 84289.38 00:06:58.758 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:58.758 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:58.758 Nvme0n1 : 5.07 1362.07 5.32 0.00 0.00 93791.38 16131.94 84692.68 00:06:58.758 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:58.758 Verification LBA range: start 0x0 length 0x4ff80 00:06:58.758 Nvme1n1p1 : 5.08 1360.72 5.32 0.00 0.00 93766.02 18350.08 81062.99 00:06:58.758 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:58.758 Verification LBA range: start 0x4ff80 length 0x4ff80 00:06:58.758 Nvme1n1p1 : 5.08 1361.25 5.32 0.00 0.00 93520.40 17946.78 81466.29 00:06:58.758 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:58.758 Verification LBA range: start 0x0 length 0x4ff7f 00:06:58.758 Nvme1n1p2 : 5.08 1359.60 5.31 0.00 0.00 93687.90 21778.12 75820.11 00:06:58.758 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:58.758 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:06:58.758 Nvme1n1p2 : 5.08 1360.58 5.31 0.00 0.00 93371.45 17946.78 78239.90 00:06:58.758 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:58.758 Verification LBA range: start 0x0 length 0x80000 00:06:58.758 Nvme2n1 : 5.09 1358.51 5.31 0.00 0.00 93582.81 20870.70 75416.81 00:06:58.758 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:58.758 Verification LBA range: start 0x80000 length 0x80000 00:06:58.758 Nvme2n1 : 5.08 1359.95 5.31 0.00 0.00 93251.33 19257.50 73803.62 00:06:58.758 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:58.758 Verification LBA range: start 0x0 length 0x80000 00:06:58.758 Nvme2n2 : 5.09 1357.40 5.30 0.00 0.00 93479.68 19156.68 77836.60 00:06:58.758 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:58.758 Verification LBA range: start 0x80000 length 0x80000 00:06:58.758 Nvme2n2 : 5.08 1359.57 5.31 0.00 0.00 93102.00 19156.68 76626.71 00:06:58.758 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:58.758 Verification LBA range: start 0x0 length 0x80000 00:06:58.758 Nvme2n3 : 5.10 1356.39 5.30 0.00 0.00 93351.44 17845.96 81062.99 00:06:58.758 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:58.758 Verification LBA range: start 0x80000 length 0x80000 00:06:58.758 Nvme2n3 : 5.09 1358.48 5.31 0.00 0.00 92978.14 17745.13 82676.18 00:06:58.758 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:58.758 Verification LBA range: start 0x0 length 0x20000 00:06:58.758 Nvme3n1 : 5.10 1356.04 5.30 0.00 0.00 93190.19 17241.01 83079.48 00:06:58.758 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:58.758 Verification LBA range: start 0x20000 length 0x20000 00:06:58.758 
Nvme3n1 : 5.09 1357.37 5.30 0.00 0.00 92932.12 18249.26 85499.27 00:06:58.758 [2024-11-28T09:39:37.638Z] =================================================================================================================== 00:06:58.758 [2024-11-28T09:39:37.638Z] Total : 19029.35 74.33 0.00 0.00 93416.52 16131.94 85499.27 00:07:00.145 00:07:00.145 real 0m7.154s 00:07:00.145 user 0m13.375s 00:07:00.145 sys 0m0.227s 00:07:00.145 09:39:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.145 ************************************ 00:07:00.145 END TEST bdev_verify 00:07:00.145 ************************************ 00:07:00.145 09:39:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:00.145 09:39:38 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:00.145 09:39:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:00.145 09:39:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.145 09:39:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:00.145 ************************************ 00:07:00.145 START TEST bdev_verify_big_io 00:07:00.145 ************************************ 00:07:00.145 09:39:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:00.145 [2024-11-28 09:39:38.716342] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:00.145 [2024-11-28 09:39:38.716465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61923 ] 00:07:00.145 [2024-11-28 09:39:38.873503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.145 [2024-11-28 09:39:38.969229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.145 [2024-11-28 09:39:38.969232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.087 Running I/O for 5 seconds... 
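A quick consistency check on the verify summary above: at the 4096-byte I/O size, the Total of 19029.35 IOPS works out to 19029.35 * 4096 / 2^20, roughly 74.3 MiB/s, matching the 74.33 MiB/s reported in the Total row.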
00:07:06.933 1099.00 IOPS, 68.69 MiB/s [2024-11-28T09:39:45.813Z] 2406.50 IOPS, 150.41 MiB/s [2024-11-28T09:39:46.072Z] 3025.33 IOPS, 189.08 MiB/s 00:07:07.192 Latency(us) 00:07:07.192 [2024-11-28T09:39:46.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.192 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.192 Verification LBA range: start 0x0 length 0xbd0b 00:07:07.192 Nvme0n1 : 5.86 104.79 6.55 0.00 0.00 1160289.02 21778.12 1309913.40 00:07:07.192 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.192 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:07.192 Nvme0n1 : 5.73 100.58 6.29 0.00 0.00 1218327.02 33272.12 1277649.53 00:07:07.192 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.192 Verification LBA range: start 0x0 length 0x4ff8 00:07:07.192 Nvme1n1p1 : 5.76 105.57 6.60 0.00 0.00 1123122.54 106470.79 1213121.77 00:07:07.192 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.192 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:07.192 Nvme1n1p1 : 5.81 104.38 6.52 0.00 0.00 1145623.52 76223.41 1219574.55 00:07:07.192 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.192 Verification LBA range: start 0x0 length 0x4ff7 00:07:07.192 Nvme1n1p2 : 5.86 109.17 6.82 0.00 0.00 1059455.92 100824.62 948557.98 00:07:07.192 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.192 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:07.192 Nvme1n1p2 : 5.92 108.16 6.76 0.00 0.00 1074399.07 77030.01 1109877.37 00:07:07.192 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.192 Verification LBA range: start 0x0 length 0x8000 00:07:07.192 Nvme2n1 : 5.97 111.43 6.96 0.00 0.00 999132.82 101631.21 961463.53 00:07:07.192 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.192 Verification LBA range: start 0x8000 length 0x8000 00:07:07.192 Nvme2n1 : 5.92 108.13 6.76 0.00 0.00 1037303.81 109697.18 1129235.69 00:07:07.192 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.192 Verification LBA range: start 0x0 length 0x8000 00:07:07.192 Nvme2n2 : 6.06 120.69 7.54 0.00 0.00 900312.23 37305.11 1219574.55 00:07:07.192 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.192 Verification LBA range: start 0x8000 length 0x8000 00:07:07.192 Nvme2n2 : 6.05 116.39 7.27 0.00 0.00 937285.10 51622.20 1161499.57 00:07:07.192 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.192 Verification LBA range: start 0x0 length 0x8000 00:07:07.192 Nvme2n3 : 6.11 119.09 7.44 0.00 0.00 884290.60 35490.26 1922927.06 00:07:07.192 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.192 Verification LBA range: start 0x8000 length 0x8000 00:07:07.192 Nvme2n3 : 6.10 125.89 7.87 0.00 0.00 842685.70 37910.06 1180857.90 00:07:07.192 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.192 Verification LBA range: start 0x0 length 0x2000 00:07:07.192 Nvme3n1 : 6.17 143.00 8.94 0.00 0.00 717154.78 765.64 1974549.27 00:07:07.192 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.192 Verification LBA range: start 0x2000 length 0x2000 00:07:07.192 Nvme3n1 : 6.16 145.48 9.09 0.00 0.00 708506.19 806.60 1206669.00 00:07:07.192 
[2024-11-28T09:39:46.072Z] =================================================================================================================== 00:07:07.192 [2024-11-28T09:39:46.072Z] Total : 1622.77 101.42 0.00 0.00 965304.22 765.64 1974549.27 00:07:08.569 00:07:08.569 real 0m8.547s 00:07:08.569 user 0m16.220s 00:07:08.569 sys 0m0.222s 00:07:08.569 09:39:47 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.569 ************************************ 00:07:08.569 END TEST bdev_verify_big_io 00:07:08.569 ************************************ 00:07:08.569 09:39:47 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:08.569 09:39:47 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:08.569 09:39:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:08.569 09:39:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.569 09:39:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.569 ************************************ 00:07:08.569 START TEST bdev_write_zeroes 00:07:08.569 ************************************ 00:07:08.569 09:39:47 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:08.569 [2024-11-28 09:39:47.314578] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:08.569 [2024-11-28 09:39:47.314669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62032 ] 00:07:08.827 [2024-11-28 09:39:47.462706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.827 [2024-11-28 09:39:47.537756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.396 Running I/O for 1 seconds... 
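The same check holds for the big-I/O pass that just finished: with 65536-byte I/Os, 1622.77 IOPS * 65536 / 2^20 is roughly 101.4 MiB/s, matching its Total row; fewer operations per second than the 4 KiB verify run, but sixteen times as much data per operation.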
00:07:10.334 64960.00 IOPS, 253.75 MiB/s 00:07:10.334 Latency(us) 00:07:10.334 [2024-11-28T09:39:49.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.334 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.334 Nvme0n1 : 1.03 9225.51 36.04 0.00 0.00 13843.51 9225.45 26214.40 00:07:10.334 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.334 Nvme1n1p1 : 1.03 9214.42 35.99 0.00 0.00 13839.38 10183.29 27222.65 00:07:10.334 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.334 Nvme1n1p2 : 1.03 9203.21 35.95 0.00 0.00 13818.49 9023.80 27021.00 00:07:10.334 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.334 Nvme2n1 : 1.03 9192.88 35.91 0.00 0.00 13805.01 8217.21 26012.75 00:07:10.334 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.334 Nvme2n2 : 1.03 9182.61 35.87 0.00 0.00 13801.99 8318.03 26012.75 00:07:10.334 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.334 Nvme2n3 : 1.03 9172.30 35.83 0.00 0.00 13786.58 10132.87 25206.15 00:07:10.334 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.334 Nvme3n1 : 1.03 9162.09 35.79 0.00 0.00 13784.06 10183.29 26012.75 00:07:10.334 [2024-11-28T09:39:49.214Z] =================================================================================================================== 00:07:10.334 [2024-11-28T09:39:49.214Z] Total : 64353.02 251.38 0.00 0.00 13811.29 8217.21 27222.65 00:07:11.275 00:07:11.275 real 0m2.604s 00:07:11.275 user 0m2.329s 00:07:11.275 sys 0m0.161s 00:07:11.275 09:39:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.275 09:39:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:11.275 ************************************ 00:07:11.275 END TEST bdev_write_zeroes 00:07:11.275 ************************************ 00:07:11.275 09:39:49 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:11.276 09:39:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:11.276 09:39:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.276 09:39:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.276 ************************************ 00:07:11.276 START TEST bdev_json_nonenclosed 00:07:11.276 ************************************ 00:07:11.276 09:39:49 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:11.276 [2024-11-28 09:39:49.979229] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:11.276 [2024-11-28 09:39:49.979346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62085 ] 00:07:11.276 [2024-11-28 09:39:50.139394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.535 [2024-11-28 09:39:50.233050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.535 [2024-11-28 09:39:50.233124] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:11.535 [2024-11-28 09:39:50.233140] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:11.535 [2024-11-28 09:39:50.233149] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.535 00:07:11.535 real 0m0.492s 00:07:11.535 user 0m0.299s 00:07:11.535 sys 0m0.089s 00:07:11.535 09:39:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.535 09:39:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:11.535 ************************************ 00:07:11.535 END TEST bdev_json_nonenclosed 00:07:11.535 ************************************ 00:07:11.796 09:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:11.796 09:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:11.796 09:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.796 09:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.796 ************************************ 00:07:11.796 START TEST bdev_json_nonarray 00:07:11.796 ************************************ 00:07:11.796 09:39:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:11.796 [2024-11-28 09:39:50.519930] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:11.796 [2024-11-28 09:39:50.520043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62105 ] 00:07:12.057 [2024-11-28 09:39:50.679358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.057 [2024-11-28 09:39:50.773900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.057 [2024-11-28 09:39:50.773980] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
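Both JSON failures above come from the shape of the config handed to bdevperf rather than from any I/O: the first file is not wrapped in a top-level object, the second has a "subsystems" member that is not an array. For contrast, a minimal config that passes both checks looks like the sketch below (the empty bdev subsystem is illustrative; the positive-path runs in this log use test/bdev/bdev.json):

  cat > /tmp/minimal_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": []
      }
    ]
  }
  EOF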
00:07:12.057 [2024-11-28 09:39:50.773998] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:12.057 [2024-11-28 09:39:50.774007] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.318 00:07:12.318 real 0m0.488s 00:07:12.318 user 0m0.299s 00:07:12.318 sys 0m0.085s 00:07:12.318 09:39:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.318 ************************************ 00:07:12.318 END TEST bdev_json_nonarray 00:07:12.318 ************************************ 00:07:12.318 09:39:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:12.318 09:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:12.318 09:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:12.318 09:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:12.318 09:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.318 09:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.318 09:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.318 ************************************ 00:07:12.318 START TEST bdev_gpt_uuid 00:07:12.318 ************************************ 00:07:12.318 09:39:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:12.318 09:39:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:12.318 09:39:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:12.318 09:39:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62136 00:07:12.318 09:39:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:12.318 09:39:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62136 00:07:12.318 09:39:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62136 ']' 00:07:12.318 09:39:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.318 09:39:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.318 09:39:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.318 09:39:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.318 09:39:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:12.318 09:39:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:12.318 [2024-11-28 09:39:51.075255] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
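The gpt_uuid test that starts here reduces to the RPC round trips traced below: ask the target for a partition bdev by its GPT unique partition GUID and confirm the reply echoes that GUID back in both the alias and the GPT metadata. A standalone sketch of that flow (the GUID and jq filters mirror the trace; the socket path is the spdk_tgt default shown above):

  uuid=6f89f330-603b-4116-ac73-2ca8eae53030
  # Query one bdev by name/alias over the default spdk_tgt RPC socket.
  bdev_json=$(scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b "$uuid")
  # Both fields below are expected to print the same GUID we asked for.
  echo "$bdev_json" | jq -r '.[0].aliases[0]'
  echo "$bdev_json" | jq -r '.[0].driver_specific.gpt.unique_partition_guid'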
00:07:12.318 [2024-11-28 09:39:51.075399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62136 ] 00:07:12.580 [2024-11-28 09:39:51.232592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.580 [2024-11-28 09:39:51.326935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.152 09:39:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.152 09:39:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:13.152 09:39:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:13.152 09:39:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.152 09:39:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:13.413 Some configs were skipped because the RPC state that can call them passed over. 00:07:13.413 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.413 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:13.413 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.413 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:13.413 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.413 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:13.413 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.413 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:13.413 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.413 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:13.413 { 00:07:13.413 "name": "Nvme1n1p1", 00:07:13.413 "aliases": [ 00:07:13.413 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:13.413 ], 00:07:13.413 "product_name": "GPT Disk", 00:07:13.413 "block_size": 4096, 00:07:13.413 "num_blocks": 655104, 00:07:13.413 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:13.413 "assigned_rate_limits": { 00:07:13.413 "rw_ios_per_sec": 0, 00:07:13.413 "rw_mbytes_per_sec": 0, 00:07:13.413 "r_mbytes_per_sec": 0, 00:07:13.413 "w_mbytes_per_sec": 0 00:07:13.413 }, 00:07:13.413 "claimed": false, 00:07:13.413 "zoned": false, 00:07:13.413 "supported_io_types": { 00:07:13.413 "read": true, 00:07:13.413 "write": true, 00:07:13.413 "unmap": true, 00:07:13.413 "flush": true, 00:07:13.413 "reset": true, 00:07:13.413 "nvme_admin": false, 00:07:13.413 "nvme_io": false, 00:07:13.413 "nvme_io_md": false, 00:07:13.413 "write_zeroes": true, 00:07:13.413 "zcopy": false, 00:07:13.413 "get_zone_info": false, 00:07:13.413 "zone_management": false, 00:07:13.413 "zone_append": false, 00:07:13.413 "compare": true, 00:07:13.413 "compare_and_write": false, 00:07:13.413 "abort": true, 00:07:13.413 "seek_hole": false, 00:07:13.413 "seek_data": false, 00:07:13.413 "copy": true, 00:07:13.413 "nvme_iov_md": false 00:07:13.413 }, 00:07:13.413 "driver_specific": { 
00:07:13.413 "gpt": { 00:07:13.413 "base_bdev": "Nvme1n1", 00:07:13.413 "offset_blocks": 256, 00:07:13.413 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:13.413 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:13.413 "partition_name": "SPDK_TEST_first" 00:07:13.413 } 00:07:13.413 } 00:07:13.413 } 00:07:13.413 ]' 00:07:13.413 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:13.674 { 00:07:13.674 "name": "Nvme1n1p2", 00:07:13.674 "aliases": [ 00:07:13.674 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:13.674 ], 00:07:13.674 "product_name": "GPT Disk", 00:07:13.674 "block_size": 4096, 00:07:13.674 "num_blocks": 655103, 00:07:13.674 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:13.674 "assigned_rate_limits": { 00:07:13.674 "rw_ios_per_sec": 0, 00:07:13.674 "rw_mbytes_per_sec": 0, 00:07:13.674 "r_mbytes_per_sec": 0, 00:07:13.674 "w_mbytes_per_sec": 0 00:07:13.674 }, 00:07:13.674 "claimed": false, 00:07:13.674 "zoned": false, 00:07:13.674 "supported_io_types": { 00:07:13.674 "read": true, 00:07:13.674 "write": true, 00:07:13.674 "unmap": true, 00:07:13.674 "flush": true, 00:07:13.674 "reset": true, 00:07:13.674 "nvme_admin": false, 00:07:13.674 "nvme_io": false, 00:07:13.674 "nvme_io_md": false, 00:07:13.674 "write_zeroes": true, 00:07:13.674 "zcopy": false, 00:07:13.674 "get_zone_info": false, 00:07:13.674 "zone_management": false, 00:07:13.674 "zone_append": false, 00:07:13.674 "compare": true, 00:07:13.674 "compare_and_write": false, 00:07:13.674 "abort": true, 00:07:13.674 "seek_hole": false, 00:07:13.674 "seek_data": false, 00:07:13.674 "copy": true, 00:07:13.674 "nvme_iov_md": false 00:07:13.674 }, 00:07:13.674 "driver_specific": { 00:07:13.674 "gpt": { 00:07:13.674 "base_bdev": "Nvme1n1", 00:07:13.674 "offset_blocks": 655360, 00:07:13.674 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:13.674 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:13.674 "partition_name": "SPDK_TEST_second" 00:07:13.674 } 00:07:13.674 } 00:07:13.674 } 00:07:13.674 ]' 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62136 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62136 ']' 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62136 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62136 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.674 killing process with pid 62136 00:07:13.674 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62136' 00:07:13.675 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62136 00:07:13.675 09:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62136 00:07:15.592 00:07:15.592 real 0m3.065s 00:07:15.592 user 0m3.209s 00:07:15.592 sys 0m0.369s 00:07:15.592 09:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.592 09:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:15.592 ************************************ 00:07:15.592 END TEST bdev_gpt_uuid 00:07:15.592 ************************************ 00:07:15.592 09:39:54 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:15.592 09:39:54 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:15.592 09:39:54 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:15.592 09:39:54 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:15.592 09:39:54 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:15.592 09:39:54 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:15.592 09:39:54 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:15.592 09:39:54 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:15.592 09:39:54 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:15.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:15.854 Waiting for block devices as requested 00:07:15.854 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:15.854 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:16.117 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:16.117 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:21.408 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:21.408 09:39:59 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:21.408 09:39:59 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:21.408 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:21.408 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:21.408 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:21.408 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:21.408 09:40:00 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:21.408 00:07:21.408 real 0m56.129s 00:07:21.408 user 1m11.100s 00:07:21.408 sys 0m7.857s 00:07:21.408 09:40:00 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.408 ************************************ 00:07:21.408 END TEST blockdev_nvme_gpt 00:07:21.408 ************************************ 00:07:21.408 09:40:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.669 09:40:00 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:21.669 09:40:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.669 09:40:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.669 09:40:00 -- common/autotest_common.sh@10 -- # set +x 00:07:21.669 ************************************ 00:07:21.669 START TEST nvme 00:07:21.669 ************************************ 00:07:21.669 09:40:00 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:21.669 * Looking for test storage... 00:07:21.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:21.669 09:40:00 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:21.669 09:40:00 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:21.669 09:40:00 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.669 09:40:00 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.669 09:40:00 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.669 09:40:00 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.669 09:40:00 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.669 09:40:00 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.669 09:40:00 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.669 09:40:00 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.669 09:40:00 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.669 09:40:00 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.669 09:40:00 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.669 09:40:00 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.670 09:40:00 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.670 09:40:00 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:21.670 09:40:00 nvme -- scripts/common.sh@345 -- # : 1 00:07:21.670 09:40:00 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.670 09:40:00 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.670 09:40:00 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:21.670 09:40:00 nvme -- scripts/common.sh@353 -- # local d=1 00:07:21.670 09:40:00 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.670 09:40:00 nvme -- scripts/common.sh@355 -- # echo 1 00:07:21.670 09:40:00 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.670 09:40:00 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:21.670 09:40:00 nvme -- scripts/common.sh@353 -- # local d=2 00:07:21.670 09:40:00 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.670 09:40:00 nvme -- scripts/common.sh@355 -- # echo 2 00:07:21.670 09:40:00 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.670 09:40:00 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.670 09:40:00 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.670 09:40:00 nvme -- scripts/common.sh@368 -- # return 0 00:07:21.670 09:40:00 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.670 09:40:00 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.670 --rc genhtml_branch_coverage=1 00:07:21.670 --rc genhtml_function_coverage=1 00:07:21.670 --rc genhtml_legend=1 00:07:21.670 --rc geninfo_all_blocks=1 00:07:21.670 --rc geninfo_unexecuted_blocks=1 00:07:21.670 00:07:21.670 ' 00:07:21.670 09:40:00 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:21.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.670 --rc genhtml_branch_coverage=1 00:07:21.670 --rc genhtml_function_coverage=1 00:07:21.670 --rc genhtml_legend=1 00:07:21.670 --rc geninfo_all_blocks=1 00:07:21.670 --rc geninfo_unexecuted_blocks=1 00:07:21.670 00:07:21.670 ' 00:07:21.670 09:40:00 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.670 --rc genhtml_branch_coverage=1 00:07:21.670 --rc genhtml_function_coverage=1 00:07:21.670 --rc genhtml_legend=1 00:07:21.670 --rc geninfo_all_blocks=1 00:07:21.670 --rc geninfo_unexecuted_blocks=1 00:07:21.670 00:07:21.670 ' 00:07:21.670 09:40:00 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.670 --rc genhtml_branch_coverage=1 00:07:21.670 --rc genhtml_function_coverage=1 00:07:21.670 --rc genhtml_legend=1 00:07:21.670 --rc geninfo_all_blocks=1 00:07:21.670 --rc geninfo_unexecuted_blocks=1 00:07:21.670 00:07:21.670 ' 00:07:21.670 09:40:00 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:22.242 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:22.813 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:22.813 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:22.813 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:22.813 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:22.813 09:40:01 nvme -- nvme/nvme.sh@79 -- # uname 00:07:22.813 09:40:01 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:22.813 09:40:01 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:22.813 09:40:01 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:22.813 09:40:01 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:22.813 09:40:01 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:22.813 09:40:01 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:22.813 09:40:01 nvme -- common/autotest_common.sh@1075 -- # stubpid=62770 00:07:22.813 09:40:01 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:22.813 Waiting for stub to ready for secondary processes... 00:07:22.813 09:40:01 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:22.813 09:40:01 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62770 ]] 00:07:22.813 09:40:01 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:22.813 09:40:01 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:22.813 [2024-11-28 09:40:01.578548] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:22.813 [2024-11-28 09:40:01.578666] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:23.756 [2024-11-28 09:40:02.349345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:23.756 [2024-11-28 09:40:02.443148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.756 [2024-11-28 09:40:02.443311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.756 [2024-11-28 09:40:02.443418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.756 [2024-11-28 09:40:02.458317] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:23.756 [2024-11-28 09:40:02.458348] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:23.756 [2024-11-28 09:40:02.473422] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:23.756 [2024-11-28 09:40:02.473626] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:23.756 [2024-11-28 09:40:02.477606] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:23.756 [2024-11-28 09:40:02.477931] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:23.756 [2024-11-28 09:40:02.478038] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:23.756 [2024-11-28 09:40:02.482092] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:23.756 [2024-11-28 09:40:02.482400] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:23.756 [2024-11-28 09:40:02.482516] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:23.756 [2024-11-28 09:40:02.485243] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:23.756 [2024-11-28 09:40:02.485404] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:23.756 [2024-11-28 09:40:02.485451] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:23.756 [2024-11-28 09:40:02.485483] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:23.756 [2024-11-28 09:40:02.485509] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:23.756 09:40:02 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:23.756 done. 00:07:23.756 09:40:02 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:23.756 09:40:02 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:23.756 09:40:02 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:23.756 09:40:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.756 09:40:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.756 ************************************ 00:07:23.756 START TEST nvme_reset 00:07:23.756 ************************************ 00:07:23.756 09:40:02 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:24.018 Initializing NVMe Controllers 00:07:24.018 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:24.018 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:24.018 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:24.018 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:24.018 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:24.018 00:07:24.018 real 0m0.225s 00:07:24.018 user 0m0.078s 00:07:24.018 sys 0m0.096s 00:07:24.018 09:40:02 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.018 09:40:02 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:24.018 ************************************ 00:07:24.018 END TEST nvme_reset 00:07:24.018 ************************************ 00:07:24.018 09:40:02 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:24.018 09:40:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.018 09:40:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.018 09:40:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:24.018 ************************************ 00:07:24.018 START TEST nvme_identify 00:07:24.018 ************************************ 00:07:24.018 09:40:02 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:24.018 09:40:02 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:24.018 09:40:02 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:24.018 09:40:02 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:24.018 09:40:02 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:24.018 09:40:02 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:24.018 09:40:02 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:24.018 09:40:02 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:24.018 09:40:02 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:24.018 09:40:02 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:24.310 09:40:02 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:24.310 09:40:02 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:24.310 09:40:02 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:24.310 [2024-11-28 
09:40:03.093175] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62791 terminated unexpected 00:07:24.310 ===================================================== 00:07:24.310 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:24.310 ===================================================== 00:07:24.310 Controller Capabilities/Features 00:07:24.310 ================================ 00:07:24.310 Vendor ID: 1b36 00:07:24.310 Subsystem Vendor ID: 1af4 00:07:24.310 Serial Number: 12343 00:07:24.310 Model Number: QEMU NVMe Ctrl 00:07:24.310 Firmware Version: 8.0.0 00:07:24.310 Recommended Arb Burst: 6 00:07:24.310 IEEE OUI Identifier: 00 54 52 00:07:24.310 Multi-path I/O 00:07:24.310 May have multiple subsystem ports: No 00:07:24.310 May have multiple controllers: Yes 00:07:24.310 Associated with SR-IOV VF: No 00:07:24.310 Max Data Transfer Size: 524288 00:07:24.310 Max Number of Namespaces: 256 00:07:24.310 Max Number of I/O Queues: 64 00:07:24.310 NVMe Specification Version (VS): 1.4 00:07:24.310 NVMe Specification Version (Identify): 1.4 00:07:24.310 Maximum Queue Entries: 2048 00:07:24.310 Contiguous Queues Required: Yes 00:07:24.310 Arbitration Mechanisms Supported 00:07:24.310 Weighted Round Robin: Not Supported 00:07:24.310 Vendor Specific: Not Supported 00:07:24.310 Reset Timeout: 7500 ms 00:07:24.310 Doorbell Stride: 4 bytes 00:07:24.310 NVM Subsystem Reset: Not Supported 00:07:24.310 Command Sets Supported 00:07:24.310 NVM Command Set: Supported 00:07:24.310 Boot Partition: Not Supported 00:07:24.310 Memory Page Size Minimum: 4096 bytes 00:07:24.310 Memory Page Size Maximum: 65536 bytes 00:07:24.310 Persistent Memory Region: Not Supported 00:07:24.310 Optional Asynchronous Events Supported 00:07:24.310 Namespace Attribute Notices: Supported 00:07:24.310 Firmware Activation Notices: Not Supported 00:07:24.310 ANA Change Notices: Not Supported 00:07:24.310 PLE Aggregate Log Change Notices: Not Supported 00:07:24.310 LBA Status Info Alert Notices: Not Supported 00:07:24.310 EGE Aggregate Log Change Notices: Not Supported 00:07:24.310 Normal NVM Subsystem Shutdown event: Not Supported 00:07:24.310 Zone Descriptor Change Notices: Not Supported 00:07:24.310 Discovery Log Change Notices: Not Supported 00:07:24.310 Controller Attributes 00:07:24.310 128-bit Host Identifier: Not Supported 00:07:24.310 Non-Operational Permissive Mode: Not Supported 00:07:24.310 NVM Sets: Not Supported 00:07:24.310 Read Recovery Levels: Not Supported 00:07:24.310 Endurance Groups: Supported 00:07:24.310 Predictable Latency Mode: Not Supported 00:07:24.310 Traffic Based Keep ALive: Not Supported 00:07:24.310 Namespace Granularity: Not Supported 00:07:24.310 SQ Associations: Not Supported 00:07:24.310 UUID List: Not Supported 00:07:24.310 Multi-Domain Subsystem: Not Supported 00:07:24.310 Fixed Capacity Management: Not Supported 00:07:24.310 Variable Capacity Management: Not Supported 00:07:24.310 Delete Endurance Group: Not Supported 00:07:24.310 Delete NVM Set: Not Supported 00:07:24.310 Extended LBA Formats Supported: Supported 00:07:24.310 Flexible Data Placement Supported: Supported 00:07:24.310 00:07:24.310 Controller Memory Buffer Support 00:07:24.310 ================================ 00:07:24.310 Supported: No 00:07:24.310 00:07:24.310 Persistent Memory Region Support 00:07:24.310 ================================ 00:07:24.310 Supported: No 00:07:24.310 00:07:24.310 Admin Command Set Attributes 00:07:24.310 ============================ 00:07:24.310 Security Send/Receive: Not 
Supported 00:07:24.310 Format NVM: Supported 00:07:24.310 Firmware Activate/Download: Not Supported 00:07:24.310 Namespace Management: Supported 00:07:24.310 Device Self-Test: Not Supported 00:07:24.310 Directives: Supported 00:07:24.310 NVMe-MI: Not Supported 00:07:24.310 Virtualization Management: Not Supported 00:07:24.310 Doorbell Buffer Config: Supported 00:07:24.310 Get LBA Status Capability: Not Supported 00:07:24.310 Command & Feature Lockdown Capability: Not Supported 00:07:24.310 Abort Command Limit: 4 00:07:24.310 Async Event Request Limit: 4 00:07:24.310 Number of Firmware Slots: N/A 00:07:24.310 Firmware Slot 1 Read-Only: N/A 00:07:24.310 Firmware Activation Without Reset: N/A 00:07:24.310 Multiple Update Detection Support: N/A 00:07:24.310 Firmware Update Granularity: No Information Provided 00:07:24.310 Per-Namespace SMART Log: Yes 00:07:24.310 Asymmetric Namespace Access Log Page: Not Supported 00:07:24.310 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:24.310 Command Effects Log Page: Supported 00:07:24.310 Get Log Page Extended Data: Supported 00:07:24.310 Telemetry Log Pages: Not Supported 00:07:24.310 Persistent Event Log Pages: Not Supported 00:07:24.310 Supported Log Pages Log Page: May Support 00:07:24.310 Commands Supported & Effects Log Page: Not Supported 00:07:24.310 Feature Identifiers & Effects Log Page:May Support 00:07:24.310 NVMe-MI Commands & Effects Log Page: May Support 00:07:24.310 Data Area 4 for Telemetry Log: Not Supported 00:07:24.310 Error Log Page Entries Supported: 1 00:07:24.310 Keep Alive: Not Supported 00:07:24.310 00:07:24.310 NVM Command Set Attributes 00:07:24.310 ========================== 00:07:24.310 Submission Queue Entry Size 00:07:24.310 Max: 64 00:07:24.310 Min: 64 00:07:24.310 Completion Queue Entry Size 00:07:24.310 Max: 16 00:07:24.310 Min: 16 00:07:24.310 Number of Namespaces: 256 00:07:24.310 Compare Command: Supported 00:07:24.310 Write Uncorrectable Command: Not Supported 00:07:24.310 Dataset Management Command: Supported 00:07:24.310 Write Zeroes Command: Supported 00:07:24.310 Set Features Save Field: Supported 00:07:24.310 Reservations: Not Supported 00:07:24.310 Timestamp: Supported 00:07:24.310 Copy: Supported 00:07:24.310 Volatile Write Cache: Present 00:07:24.310 Atomic Write Unit (Normal): 1 00:07:24.310 Atomic Write Unit (PFail): 1 00:07:24.310 Atomic Compare & Write Unit: 1 00:07:24.310 Fused Compare & Write: Not Supported 00:07:24.310 Scatter-Gather List 00:07:24.310 SGL Command Set: Supported 00:07:24.310 SGL Keyed: Not Supported 00:07:24.310 SGL Bit Bucket Descriptor: Not Supported 00:07:24.310 SGL Metadata Pointer: Not Supported 00:07:24.310 Oversized SGL: Not Supported 00:07:24.310 SGL Metadata Address: Not Supported 00:07:24.310 SGL Offset: Not Supported 00:07:24.310 Transport SGL Data Block: Not Supported 00:07:24.310 Replay Protected Memory Block: Not Supported 00:07:24.310 00:07:24.310 Firmware Slot Information 00:07:24.310 ========================= 00:07:24.310 Active slot: 1 00:07:24.310 Slot 1 Firmware Revision: 1.0 00:07:24.310 00:07:24.311 00:07:24.311 Commands Supported and Effects 00:07:24.311 ============================== 00:07:24.311 Admin Commands 00:07:24.311 -------------- 00:07:24.311 Delete I/O Submission Queue (00h): Supported 00:07:24.311 Create I/O Submission Queue (01h): Supported 00:07:24.311 Get Log Page (02h): Supported 00:07:24.311 Delete I/O Completion Queue (04h): Supported 00:07:24.311 Create I/O Completion Queue (05h): Supported 00:07:24.311 Identify (06h): Supported 
00:07:24.311 Abort (08h): Supported 00:07:24.311 Set Features (09h): Supported 00:07:24.311 Get Features (0Ah): Supported 00:07:24.311 Asynchronous Event Request (0Ch): Supported 00:07:24.311 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:24.311 Directive Send (19h): Supported 00:07:24.311 Directive Receive (1Ah): Supported 00:07:24.311 Virtualization Management (1Ch): Supported 00:07:24.311 Doorbell Buffer Config (7Ch): Supported 00:07:24.311 Format NVM (80h): Supported LBA-Change 00:07:24.311 I/O Commands 00:07:24.311 ------------ 00:07:24.311 Flush (00h): Supported LBA-Change 00:07:24.311 Write (01h): Supported LBA-Change 00:07:24.311 Read (02h): Supported 00:07:24.311 Compare (05h): Supported 00:07:24.311 Write Zeroes (08h): Supported LBA-Change 00:07:24.311 Dataset Management (09h): Supported LBA-Change 00:07:24.311 Unknown (0Ch): Supported 00:07:24.311 Unknown (12h): Supported 00:07:24.311 Copy (19h): Supported LBA-Change 00:07:24.311 Unknown (1Dh): Supported LBA-Change 00:07:24.311 00:07:24.311 Error Log 00:07:24.311 ========= 00:07:24.311 00:07:24.311 Arbitration 00:07:24.311 =========== 00:07:24.311 Arbitration Burst: no limit 00:07:24.311 00:07:24.311 Power Management 00:07:24.311 ================ 00:07:24.311 Number of Power States: 1 00:07:24.311 Current Power State: Power State #0 00:07:24.311 Power State #0: 00:07:24.311 Max Power: 25.00 W 00:07:24.311 Non-Operational State: Operational 00:07:24.311 Entry Latency: 16 microseconds 00:07:24.311 Exit Latency: 4 microseconds 00:07:24.311 Relative Read Throughput: 0 00:07:24.311 Relative Read Latency: 0 00:07:24.311 Relative Write Throughput: 0 00:07:24.311 Relative Write Latency: 0 00:07:24.311 Idle Power: Not Reported 00:07:24.311 Active Power: Not Reported 00:07:24.311 Non-Operational Permissive Mode: Not Supported 00:07:24.311 00:07:24.311 Health Information 00:07:24.311 ================== 00:07:24.311 Critical Warnings: 00:07:24.311 Available Spare Space: OK 00:07:24.311 Temperature: OK 00:07:24.311 Device Reliability: OK 00:07:24.311 Read Only: No 00:07:24.311 Volatile Memory Backup: OK 00:07:24.311 Current Temperature: 323 Kelvin (50 Celsius) 00:07:24.311 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:24.311 Available Spare: 0% 00:07:24.311 Available Spare Threshold: 0% 00:07:24.311 Life Percentage Used: 0% 00:07:24.311 Data Units Read: 825 00:07:24.311 Data Units Written: 754 00:07:24.311 Host Read Commands: 36161 00:07:24.311 Host Write Commands: 35584 00:07:24.311 Controller Busy Time: 0 minutes 00:07:24.311 Power Cycles: 0 00:07:24.311 Power On Hours: 0 hours 00:07:24.311 Unsafe Shutdowns: 0 00:07:24.311 Unrecoverable Media Errors: 0 00:07:24.311 Lifetime Error Log Entries: 0 00:07:24.311 Warning Temperature Time: 0 minutes 00:07:24.311 Critical Temperature Time: 0 minutes 00:07:24.311 00:07:24.311 Number of Queues 00:07:24.311 ================ 00:07:24.311 Number of I/O Submission Queues: 64 00:07:24.311 Number of I/O Completion Queues: 64 00:07:24.311 00:07:24.311 ZNS Specific Controller Data 00:07:24.311 ============================ 00:07:24.311 Zone Append Size Limit: 0 00:07:24.311 00:07:24.311 00:07:24.311 Active Namespaces 00:07:24.311 ================= 00:07:24.311 Namespace ID:1 00:07:24.311 Error Recovery Timeout: Unlimited 00:07:24.311 Command Set Identifier: NVM (00h) 00:07:24.311 Deallocate: Supported 00:07:24.311 Deallocated/Unwritten Error: Supported 00:07:24.311 Deallocated Read Value: All 0x00 00:07:24.311 Deallocate in Write Zeroes: Not Supported 00:07:24.311 Deallocated Guard 
Field: 0xFFFF 00:07:24.311 Flush: Supported 00:07:24.311 Reservation: Not Supported 00:07:24.311 Namespace Sharing Capabilities: Multiple Controllers 00:07:24.311 Size (in LBAs): 262144 (1GiB) 00:07:24.311 Capacity (in LBAs): 262144 (1GiB) 00:07:24.311 Utilization (in LBAs): 262144 (1GiB) 00:07:24.311 Thin Provisioning: Not Supported 00:07:24.311 Per-NS Atomic Units: No 00:07:24.311 Maximum Single Source Range Length: 128 00:07:24.311 Maximum Copy Length: 128 00:07:24.311 Maximum Source Range Count: 128 00:07:24.311 NGUID/EUI64 Never Reused: No 00:07:24.311 Namespace Write Protected: No 00:07:24.311 Endurance group ID: 1 00:07:24.311 Number of LBA Formats: 8 00:07:24.311 Current LBA Format: LBA Format #04 00:07:24.311 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:24.311 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:24.311 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:24.311 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:24.311 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:24.311 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:24.311 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:24.311 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:24.311 00:07:24.311 Get Feature FDP: 00:07:24.311 ================ 00:07:24.311 Enabled: Yes 00:07:24.311 FDP configuration index: 0 00:07:24.311 00:07:24.311 FDP configurations log page 00:07:24.311 =========================== 00:07:24.311 Number of FDP configurations: 1 00:07:24.311 Version: 0 00:07:24.311 Size: 112 00:07:24.311 FDP Configuration Descriptor: 0 00:07:24.311 Descriptor Size: 96 00:07:24.311 Reclaim Group Identifier format: 2 00:07:24.311 FDP Volatile Write Cache: Not Present 00:07:24.311 FDP Configuration: Valid 00:07:24.311 Vendor Specific Size: 0 00:07:24.311 Number of Reclaim Groups: 2 00:07:24.311 Number of Recalim Unit Handles: 8 00:07:24.311 Max Placement Identifiers: 128 00:07:24.311 Number of Namespaces Suppprted: 256 00:07:24.311 Reclaim unit Nominal Size: 6000000 bytes 00:07:24.311 Estimated Reclaim Unit Time Limit: Not Reported 00:07:24.311 RUH Desc #000: RUH Type: Initially Isolated 00:07:24.311 RUH Desc #001: RUH Type: Initially Isolated 00:07:24.311 RUH Desc #002: RUH Type: Initially Isolated 00:07:24.311 RUH Desc #003: RUH Type: Initially Isolated 00:07:24.311 RUH Desc #004: RUH Type: Initially Isolated 00:07:24.311 RUH Desc #005: RUH Type: Initially Isolated 00:07:24.311 RUH Desc #006: RUH Type: Initially Isolated 00:07:24.311 RUH Desc #007: RUH Type: Initially Isolated 00:07:24.311 00:07:24.311 FDP reclaim unit handle usage log page 00:07:24.311 ==================================[2024-11-28 09:40:03.094924] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62791 terminated unexpected 00:07:24.311 ==== 00:07:24.311 Number of Reclaim Unit Handles: 8 00:07:24.311 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:24.311 RUH Usage Desc #001: RUH Attributes: Unused 00:07:24.311 RUH Usage Desc #002: RUH Attributes: Unused 00:07:24.311 RUH Usage Desc #003: RUH Attributes: Unused 00:07:24.311 RUH Usage Desc #004: RUH Attributes: Unused 00:07:24.311 RUH Usage Desc #005: RUH Attributes: Unused 00:07:24.311 RUH Usage Desc #006: RUH Attributes: Unused 00:07:24.311 RUH Usage Desc #007: RUH Attributes: Unused 00:07:24.311 00:07:24.311 FDP statistics log page 00:07:24.311 ======================= 00:07:24.311 Host bytes with metadata written: 481402880 00:07:24.311 Media bytes with metadata written: 481456128 00:07:24.311 Media 
bytes erased: 0 00:07:24.311 00:07:24.311 FDP events log page 00:07:24.311 =================== 00:07:24.311 Number of FDP events: 0 00:07:24.311 00:07:24.311 NVM Specific Namespace Data 00:07:24.311 =========================== 00:07:24.311 Logical Block Storage Tag Mask: 0 00:07:24.311 Protection Information Capabilities: 00:07:24.311 16b Guard Protection Information Storage Tag Support: No 00:07:24.311 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:24.311 Storage Tag Check Read Support: No 00:07:24.311 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.311 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.311 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.311 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.311 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.311 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.311 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.311 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.311 ===================================================== 00:07:24.311 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:24.311 ===================================================== 00:07:24.311 Controller Capabilities/Features 00:07:24.312 ================================ 00:07:24.312 Vendor ID: 1b36 00:07:24.312 Subsystem Vendor ID: 1af4 00:07:24.312 Serial Number: 12340 00:07:24.312 Model Number: QEMU NVMe Ctrl 00:07:24.312 Firmware Version: 8.0.0 00:07:24.312 Recommended Arb Burst: 6 00:07:24.312 IEEE OUI Identifier: 00 54 52 00:07:24.312 Multi-path I/O 00:07:24.312 May have multiple subsystem ports: No 00:07:24.312 May have multiple controllers: No 00:07:24.312 Associated with SR-IOV VF: No 00:07:24.312 Max Data Transfer Size: 524288 00:07:24.312 Max Number of Namespaces: 256 00:07:24.312 Max Number of I/O Queues: 64 00:07:24.312 NVMe Specification Version (VS): 1.4 00:07:24.312 NVMe Specification Version (Identify): 1.4 00:07:24.312 Maximum Queue Entries: 2048 00:07:24.312 Contiguous Queues Required: Yes 00:07:24.312 Arbitration Mechanisms Supported 00:07:24.312 Weighted Round Robin: Not Supported 00:07:24.312 Vendor Specific: Not Supported 00:07:24.312 Reset Timeout: 7500 ms 00:07:24.312 Doorbell Stride: 4 bytes 00:07:24.312 NVM Subsystem Reset: Not Supported 00:07:24.312 Command Sets Supported 00:07:24.312 NVM Command Set: Supported 00:07:24.312 Boot Partition: Not Supported 00:07:24.312 Memory Page Size Minimum: 4096 bytes 00:07:24.312 Memory Page Size Maximum: 65536 bytes 00:07:24.312 Persistent Memory Region: Not Supported 00:07:24.312 Optional Asynchronous Events Supported 00:07:24.312 Namespace Attribute Notices: Supported 00:07:24.312 Firmware Activation Notices: Not Supported 00:07:24.312 ANA Change Notices: Not Supported 00:07:24.312 PLE Aggregate Log Change Notices: Not Supported 00:07:24.312 LBA Status Info Alert Notices: Not Supported 00:07:24.312 EGE Aggregate Log Change Notices: Not Supported 00:07:24.312 Normal NVM Subsystem Shutdown event: Not Supported 00:07:24.312 Zone Descriptor Change Notices: Not Supported 00:07:24.312 Discovery Log Change Notices: Not Supported 00:07:24.312 Controller Attributes 00:07:24.312 
128-bit Host Identifier: Not Supported 00:07:24.312 Non-Operational Permissive Mode: Not Supported 00:07:24.312 NVM Sets: Not Supported 00:07:24.312 Read Recovery Levels: Not Supported 00:07:24.312 Endurance Groups: Not Supported 00:07:24.312 Predictable Latency Mode: Not Supported 00:07:24.312 Traffic Based Keep ALive: Not Supported 00:07:24.312 Namespace Granularity: Not Supported 00:07:24.312 SQ Associations: Not Supported 00:07:24.312 UUID List: Not Supported 00:07:24.312 Multi-Domain Subsystem: Not Supported 00:07:24.312 Fixed Capacity Management: Not Supported 00:07:24.312 Variable Capacity Management: Not Supported 00:07:24.312 Delete Endurance Group: Not Supported 00:07:24.312 Delete NVM Set: Not Supported 00:07:24.312 Extended LBA Formats Supported: Supported 00:07:24.312 Flexible Data Placement Supported: Not Supported 00:07:24.312 00:07:24.312 Controller Memory Buffer Support 00:07:24.312 ================================ 00:07:24.312 Supported: No 00:07:24.312 00:07:24.312 Persistent Memory Region Support 00:07:24.312 ================================ 00:07:24.312 Supported: No 00:07:24.312 00:07:24.312 Admin Command Set Attributes 00:07:24.312 ============================ 00:07:24.312 Security Send/Receive: Not Supported 00:07:24.312 Format NVM: Supported 00:07:24.312 Firmware Activate/Download: Not Supported 00:07:24.312 Namespace Management: Supported 00:07:24.312 Device Self-Test: Not Supported 00:07:24.312 Directives: Supported 00:07:24.312 NVMe-MI: Not Supported 00:07:24.312 Virtualization Management: Not Supported 00:07:24.312 Doorbell Buffer Config: Supported 00:07:24.312 Get LBA Status Capability: Not Supported 00:07:24.312 Command & Feature Lockdown Capability: Not Supported 00:07:24.312 Abort Command Limit: 4 00:07:24.312 Async Event Request Limit: 4 00:07:24.312 Number of Firmware Slots: N/A 00:07:24.312 Firmware Slot 1 Read-Only: N/A 00:07:24.312 Firmware Activation Without Reset: N/A 00:07:24.312 Multiple Update Detection Support: N/A 00:07:24.312 Firmware Update Granularity: No Information Provided 00:07:24.312 Per-Namespace SMART Log: Yes 00:07:24.312 Asymmetric Namespace Access Log Page: Not Supported 00:07:24.312 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:24.312 Command Effects Log Page: Supported 00:07:24.312 Get Log Page Extended Data: Supported 00:07:24.312 Telemetry Log Pages: Not Supported 00:07:24.312 Persistent Event Log Pages: Not Supported 00:07:24.312 Supported Log Pages Log Page: May Support 00:07:24.312 Commands Supported & Effects Log Page: Not Supported 00:07:24.312 Feature Identifiers & Effects Log Page:May Support 00:07:24.312 NVMe-MI Commands & Effects Log Page: May Support 00:07:24.312 Data Area 4 for Telemetry Log: Not Supported 00:07:24.312 Error Log Page Entries Supported: 1 00:07:24.312 Keep Alive: Not Supported 00:07:24.312 00:07:24.312 NVM Command Set Attributes 00:07:24.312 ========================== 00:07:24.312 Submission Queue Entry Size 00:07:24.312 Max: 64 00:07:24.312 Min: 64 00:07:24.312 Completion Queue Entry Size 00:07:24.312 Max: 16 00:07:24.312 Min: 16 00:07:24.312 Number of Namespaces: 256 00:07:24.312 Compare Command: Supported 00:07:24.312 Write Uncorrectable Command: Not Supported 00:07:24.312 Dataset Management Command: Supported 00:07:24.312 Write Zeroes Command: Supported 00:07:24.312 Set Features Save Field: Supported 00:07:24.312 Reservations: Not Supported 00:07:24.312 Timestamp: Supported 00:07:24.312 Copy: Supported 00:07:24.312 Volatile Write Cache: Present 00:07:24.312 Atomic Write Unit (Normal): 1 
00:07:24.312 Atomic Write Unit (PFail): 1 00:07:24.312 Atomic Compare & Write Unit: 1 00:07:24.312 Fused Compare & Write: Not Supported 00:07:24.312 Scatter-Gather List 00:07:24.312 SGL Command Set: Supported 00:07:24.312 SGL Keyed: Not Supported 00:07:24.312 SGL Bit Bucket Descriptor: Not Supported 00:07:24.312 SGL Metadata Pointer: Not Supported 00:07:24.312 Oversized SGL: Not Supported 00:07:24.312 SGL Metadata Address: Not Supported 00:07:24.312 SGL Offset: Not Supported 00:07:24.312 Transport SGL Data Block: Not Supported 00:07:24.312 Replay Protected Memory Block: Not Supported 00:07:24.312 00:07:24.312 Firmware Slot Information 00:07:24.312 ========================= 00:07:24.312 Active slot: 1 00:07:24.312 Slot 1 Firmware Revision: 1.0 00:07:24.312 00:07:24.312 00:07:24.312 Commands Supported and Effects 00:07:24.312 ============================== 00:07:24.312 Admin Commands 00:07:24.312 -------------- 00:07:24.312 Delete I/O Submission Queue (00h): Supported 00:07:24.312 Create I/O Submission Queue (01h): Supported 00:07:24.312 Get Log Page (02h): Supported 00:07:24.312 Delete I/O Completion Queue (04h): Supported 00:07:24.312 Create I/O Completion Queue (05h): Supported 00:07:24.312 Identify (06h): Supported 00:07:24.312 Abort (08h): Supported 00:07:24.312 Set Features (09h): Supported 00:07:24.312 Get Features (0Ah): Supported 00:07:24.312 Asynchronous Event Request (0Ch): Supported 00:07:24.312 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:24.312 Directive Send (19h): Supported 00:07:24.312 Directive Receive (1Ah): Supported 00:07:24.312 Virtualization Management (1Ch): Supported 00:07:24.312 Doorbell Buffer Config (7Ch): Supported 00:07:24.312 Format NVM (80h): Supported LBA-Change 00:07:24.312 I/O Commands 00:07:24.312 ------------ 00:07:24.312 Flush (00h): Supported LBA-Change 00:07:24.312 Write (01h): Supported LBA-Change 00:07:24.312 Read (02h): Supported 00:07:24.312 Compare (05h): Supported 00:07:24.312 Write Zeroes (08h): Supported LBA-Change 00:07:24.312 Dataset Management (09h): Supported LBA-Change 00:07:24.312 Unknown (0Ch): Supported 00:07:24.312 Unknown (12h): Supported 00:07:24.312 Copy (19h): Supported LBA-Change 00:07:24.312 Unknown (1Dh): Supported LBA-Change 00:07:24.312 00:07:24.312 Error Log 00:07:24.312 ========= 00:07:24.312 00:07:24.312 Arbitration 00:07:24.312 =========== 00:07:24.312 Arbitration Burst: no limit 00:07:24.312 00:07:24.312 Power Management 00:07:24.312 ================ 00:07:24.312 Number of Power States: 1 00:07:24.312 Current Power State: Power State #0 00:07:24.312 Power State #0: 00:07:24.312 Max Power: 25.00 W 00:07:24.312 Non-Operational State: Operational 00:07:24.312 Entry Latency: 16 microseconds 00:07:24.312 Exit Latency: 4 microseconds 00:07:24.312 Relative Read Throughput: 0 00:07:24.312 Relative Read Latency: 0 00:07:24.312 Relative Write Throughput: 0 00:07:24.312 Relative Write Latency: 0 00:07:24.312 Idle Power: Not Reported 00:07:24.312 Active Power: Not Reported 00:07:24.312 Non-Operational Permissive Mode: Not Supported 00:07:24.312 00:07:24.312 Health Information 00:07:24.312 ================== 00:07:24.312 Critical Warnings: 00:07:24.312 Available Spare Space: OK 00:07:24.313 Temperature: OK 00:07:24.313 Device Reliability: OK 00:07:24.313 Read Only: No 00:07:24.313 Volatile Memory Backup: OK 00:07:24.313 Current Temperature: 323 Kelvin (50 Celsius) 00:07:24.313 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:24.313 Available Spare: 0% 00:07:24.313 Available Spare Threshold: 0% 00:07:24.313 Life 
Percentage Used: 0% 00:07:24.313 Data Units Read: 654 00:07:24.313 Data Units Written: 582 00:07:24.313 Host Read Commands: 34650 00:07:24.313 Host Write Commands: 34436 00:07:24.313 Controller Busy Time: 0 minutes 00:07:24.313 Power Cycles: 0 00:07:24.313 Power On Hours: 0 hours 00:07:24.313 Unsafe Shutdowns: 0 00:07:24.313 Unrecoverable Media Errors: 0 00:07:24.313 Lifetime Error Log Entries: 0 00:07:24.313 Warning Temperature Time: 0 minutes 00:07:24.313 Critical Temperature Time: 0 minutes 00:07:24.313 00:07:24.313 Number of Queues 00:07:24.313 ================ 00:07:24.313 Number of I/O Submission Queues: 64 00:07:24.313 Number of I/O Completion Queues: 64 00:07:24.313 00:07:24.313 ZNS Specific Controller Data 00:07:24.313 ============================ 00:07:24.313 Zone Append Size Limit: 0 00:07:24.313 00:07:24.313 00:07:24.313 Active Namespaces 00:07:24.313 ================= 00:07:24.313 Namespace ID:1 00:07:24.313 Error Recovery Timeout: Unlimited 00:07:24.313 Command Set Identifier: NVM (00h) 00:07:24.313 Deallocate: Supported 00:07:24.313 Deallocated/Unwritten Error: Supported 00:07:24.313 Deallocated Read Value: All 0x00 00:07:24.313 Deallocate in Write Zeroes: Not Supported 00:07:24.313 Deallocated Guard Field: 0xFFFF 00:07:24.313 Flush: Supported 00:07:24.313 Reservation: Not Supported 00:07:24.313 Metadata Transferred as: Separate Metadata Buffer 00:07:24.313 Namespace Sharing Capabilities: Private 00:07:24.313 Size (in LBAs): 1548666 (5GiB) 00:07:24.313 Capacity (in LBAs): 1548666 (5GiB) 00:07:24.313 Utilization (in LBAs): 1548666 (5GiB) 00:07:24.313 Thin Provisioning: Not Supported 00:07:24.313 Per-NS Atomic Units: No 00:07:24.313 Maximum Single Source Range Length: 128 00:07:24.313 Maximum Copy Length: 128 00:07:24.313 Maximum Source Range Count: 128 00:07:24.313 NGUID/EUI64 Never Reused: No 00:07:24.313 Namespace Write Protected: No 00:07:24.313 Number of LBA Formats: 8 00:07:24.313 Current LBA Format: [2024-11-28 09:40:03.095587] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62791 terminated unexpected 00:07:24.313 LBA Format #07 00:07:24.313 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:24.313 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:24.313 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:24.313 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:24.313 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:24.313 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:24.313 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:24.313 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:24.313 00:07:24.313 NVM Specific Namespace Data 00:07:24.313 =========================== 00:07:24.313 Logical Block Storage Tag Mask: 0 00:07:24.313 Protection Information Capabilities: 00:07:24.313 16b Guard Protection Information Storage Tag Support: No 00:07:24.313 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:24.313 Storage Tag Check Read Support: No 00:07:24.313 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.313 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.313 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.313 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.313 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.313 
Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.313 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.313 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.313 ===================================================== 00:07:24.313 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:24.313 ===================================================== 00:07:24.313 Controller Capabilities/Features 00:07:24.313 ================================ 00:07:24.313 Vendor ID: 1b36 00:07:24.313 Subsystem Vendor ID: 1af4 00:07:24.313 Serial Number: 12341 00:07:24.313 Model Number: QEMU NVMe Ctrl 00:07:24.313 Firmware Version: 8.0.0 00:07:24.313 Recommended Arb Burst: 6 00:07:24.313 IEEE OUI Identifier: 00 54 52 00:07:24.313 Multi-path I/O 00:07:24.313 May have multiple subsystem ports: No 00:07:24.313 May have multiple controllers: No 00:07:24.313 Associated with SR-IOV VF: No 00:07:24.313 Max Data Transfer Size: 524288 00:07:24.313 Max Number of Namespaces: 256 00:07:24.313 Max Number of I/O Queues: 64 00:07:24.313 NVMe Specification Version (VS): 1.4 00:07:24.313 NVMe Specification Version (Identify): 1.4 00:07:24.313 Maximum Queue Entries: 2048 00:07:24.313 Contiguous Queues Required: Yes 00:07:24.313 Arbitration Mechanisms Supported 00:07:24.313 Weighted Round Robin: Not Supported 00:07:24.313 Vendor Specific: Not Supported 00:07:24.313 Reset Timeout: 7500 ms 00:07:24.313 Doorbell Stride: 4 bytes 00:07:24.313 NVM Subsystem Reset: Not Supported 00:07:24.313 Command Sets Supported 00:07:24.313 NVM Command Set: Supported 00:07:24.313 Boot Partition: Not Supported 00:07:24.313 Memory Page Size Minimum: 4096 bytes 00:07:24.313 Memory Page Size Maximum: 65536 bytes 00:07:24.313 Persistent Memory Region: Not Supported 00:07:24.313 Optional Asynchronous Events Supported 00:07:24.313 Namespace Attribute Notices: Supported 00:07:24.313 Firmware Activation Notices: Not Supported 00:07:24.313 ANA Change Notices: Not Supported 00:07:24.313 PLE Aggregate Log Change Notices: Not Supported 00:07:24.313 LBA Status Info Alert Notices: Not Supported 00:07:24.313 EGE Aggregate Log Change Notices: Not Supported 00:07:24.313 Normal NVM Subsystem Shutdown event: Not Supported 00:07:24.313 Zone Descriptor Change Notices: Not Supported 00:07:24.313 Discovery Log Change Notices: Not Supported 00:07:24.313 Controller Attributes 00:07:24.313 128-bit Host Identifier: Not Supported 00:07:24.313 Non-Operational Permissive Mode: Not Supported 00:07:24.313 NVM Sets: Not Supported 00:07:24.313 Read Recovery Levels: Not Supported 00:07:24.313 Endurance Groups: Not Supported 00:07:24.313 Predictable Latency Mode: Not Supported 00:07:24.313 Traffic Based Keep ALive: Not Supported 00:07:24.313 Namespace Granularity: Not Supported 00:07:24.313 SQ Associations: Not Supported 00:07:24.313 UUID List: Not Supported 00:07:24.313 Multi-Domain Subsystem: Not Supported 00:07:24.313 Fixed Capacity Management: Not Supported 00:07:24.313 Variable Capacity Management: Not Supported 00:07:24.313 Delete Endurance Group: Not Supported 00:07:24.313 Delete NVM Set: Not Supported 00:07:24.313 Extended LBA Formats Supported: Supported 00:07:24.313 Flexible Data Placement Supported: Not Supported 00:07:24.313 00:07:24.313 Controller Memory Buffer Support 00:07:24.313 ================================ 00:07:24.313 Supported: No 00:07:24.313 00:07:24.313 Persistent Memory Region Support 00:07:24.313 
================================ 00:07:24.313 Supported: No 00:07:24.313 00:07:24.313 Admin Command Set Attributes 00:07:24.313 ============================ 00:07:24.313 Security Send/Receive: Not Supported 00:07:24.313 Format NVM: Supported 00:07:24.313 Firmware Activate/Download: Not Supported 00:07:24.313 Namespace Management: Supported 00:07:24.313 Device Self-Test: Not Supported 00:07:24.313 Directives: Supported 00:07:24.313 NVMe-MI: Not Supported 00:07:24.313 Virtualization Management: Not Supported 00:07:24.313 Doorbell Buffer Config: Supported 00:07:24.313 Get LBA Status Capability: Not Supported 00:07:24.313 Command & Feature Lockdown Capability: Not Supported 00:07:24.313 Abort Command Limit: 4 00:07:24.313 Async Event Request Limit: 4 00:07:24.313 Number of Firmware Slots: N/A 00:07:24.313 Firmware Slot 1 Read-Only: N/A 00:07:24.313 Firmware Activation Without Reset: N/A 00:07:24.313 Multiple Update Detection Support: N/A 00:07:24.313 Firmware Update Granularity: No Information Provided 00:07:24.313 Per-Namespace SMART Log: Yes 00:07:24.313 Asymmetric Namespace Access Log Page: Not Supported 00:07:24.313 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:24.313 Command Effects Log Page: Supported 00:07:24.313 Get Log Page Extended Data: Supported 00:07:24.313 Telemetry Log Pages: Not Supported 00:07:24.313 Persistent Event Log Pages: Not Supported 00:07:24.313 Supported Log Pages Log Page: May Support 00:07:24.313 Commands Supported & Effects Log Page: Not Supported 00:07:24.313 Feature Identifiers & Effects Log Page:May Support 00:07:24.313 NVMe-MI Commands & Effects Log Page: May Support 00:07:24.313 Data Area 4 for Telemetry Log: Not Supported 00:07:24.314 Error Log Page Entries Supported: 1 00:07:24.314 Keep Alive: Not Supported 00:07:24.314 00:07:24.314 NVM Command Set Attributes 00:07:24.314 ========================== 00:07:24.314 Submission Queue Entry Size 00:07:24.314 Max: 64 00:07:24.314 Min: 64 00:07:24.314 Completion Queue Entry Size 00:07:24.314 Max: 16 00:07:24.314 Min: 16 00:07:24.314 Number of Namespaces: 256 00:07:24.314 Compare Command: Supported 00:07:24.314 Write Uncorrectable Command: Not Supported 00:07:24.314 Dataset Management Command: Supported 00:07:24.314 Write Zeroes Command: Supported 00:07:24.314 Set Features Save Field: Supported 00:07:24.314 Reservations: Not Supported 00:07:24.314 Timestamp: Supported 00:07:24.314 Copy: Supported 00:07:24.314 Volatile Write Cache: Present 00:07:24.314 Atomic Write Unit (Normal): 1 00:07:24.314 Atomic Write Unit (PFail): 1 00:07:24.314 Atomic Compare & Write Unit: 1 00:07:24.314 Fused Compare & Write: Not Supported 00:07:24.314 Scatter-Gather List 00:07:24.314 SGL Command Set: Supported 00:07:24.314 SGL Keyed: Not Supported 00:07:24.314 SGL Bit Bucket Descriptor: Not Supported 00:07:24.314 SGL Metadata Pointer: Not Supported 00:07:24.314 Oversized SGL: Not Supported 00:07:24.314 SGL Metadata Address: Not Supported 00:07:24.314 SGL Offset: Not Supported 00:07:24.314 Transport SGL Data Block: Not Supported 00:07:24.314 Replay Protected Memory Block: Not Supported 00:07:24.314 00:07:24.314 Firmware Slot Information 00:07:24.314 ========================= 00:07:24.314 Active slot: 1 00:07:24.314 Slot 1 Firmware Revision: 1.0 00:07:24.314 00:07:24.314 00:07:24.314 Commands Supported and Effects 00:07:24.314 ============================== 00:07:24.314 Admin Commands 00:07:24.314 -------------- 00:07:24.314 Delete I/O Submission Queue (00h): Supported 00:07:24.314 Create I/O Submission Queue (01h): Supported 00:07:24.314 
Get Log Page (02h): Supported 00:07:24.314 Delete I/O Completion Queue (04h): Supported 00:07:24.314 Create I/O Completion Queue (05h): Supported 00:07:24.314 Identify (06h): Supported 00:07:24.314 Abort (08h): Supported 00:07:24.314 Set Features (09h): Supported 00:07:24.314 Get Features (0Ah): Supported 00:07:24.314 Asynchronous Event Request (0Ch): Supported 00:07:24.314 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:24.314 Directive Send (19h): Supported 00:07:24.314 Directive Receive (1Ah): Supported 00:07:24.314 Virtualization Management (1Ch): Supported 00:07:24.314 Doorbell Buffer Config (7Ch): Supported 00:07:24.314 Format NVM (80h): Supported LBA-Change 00:07:24.314 I/O Commands 00:07:24.314 ------------ 00:07:24.314 Flush (00h): Supported LBA-Change 00:07:24.314 Write (01h): Supported LBA-Change 00:07:24.314 Read (02h): Supported 00:07:24.314 Compare (05h): Supported 00:07:24.314 Write Zeroes (08h): Supported LBA-Change 00:07:24.314 Dataset Management (09h): Supported LBA-Change 00:07:24.314 Unknown (0Ch): Supported 00:07:24.314 Unknown (12h): Supported 00:07:24.314 Copy (19h): Supported LBA-Change 00:07:24.314 Unknown (1Dh): Supported LBA-Change 00:07:24.314 00:07:24.314 Error Log 00:07:24.314 ========= 00:07:24.314 00:07:24.314 Arbitration 00:07:24.314 =========== 00:07:24.314 Arbitration Burst: no limit 00:07:24.314 00:07:24.314 Power Management 00:07:24.314 ================ 00:07:24.314 Number of Power States: 1 00:07:24.314 Current Power State: Power State #0 00:07:24.314 Power State #0: 00:07:24.314 Max Power: 25.00 W 00:07:24.314 Non-Operational State: Operational 00:07:24.314 Entry Latency: 16 microseconds 00:07:24.314 Exit Latency: 4 microseconds 00:07:24.314 Relative Read Throughput: 0 00:07:24.314 Relative Read Latency: 0 00:07:24.314 Relative Write Throughput: 0 00:07:24.314 Relative Write Latency: 0 00:07:24.314 Idle Power: Not Reported 00:07:24.314 Active Power: Not Reported 00:07:24.314 Non-Operational Permissive Mode: Not Supported 00:07:24.314 00:07:24.314 Health Information 00:07:24.314 ================== 00:07:24.314 Critical Warnings: 00:07:24.314 Available Spare Space: OK 00:07:24.314 Temperature: OK 00:07:24.314 Device Reliability: OK 00:07:24.314 Read Only: No 00:07:24.314 Volatile Memory Backup: OK 00:07:24.314 Current Temperature: 323 Kelvin (50 Celsius) 00:07:24.314 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:24.314 Available Spare: 0% 00:07:24.314 Available Spare Threshold: 0% 00:07:24.314 Life Percentage Used: 0% 00:07:24.314 Data Units Read: 1007 00:07:24.314 Data Units Written: 868 00:07:24.314 Host Read Commands: 50979 00:07:24.314 Host Write Commands: 49689 00:07:24.314 Controller Busy Time: 0 minutes 00:07:24.314 Power Cycles: 0 00:07:24.314 Power On Hours: 0 hours 00:07:24.314 Unsafe Shutdowns: 0 00:07:24.314 Unrecoverable Media Errors: 0 00:07:24.314 Lifetime Error Log Entries: 0 00:07:24.314 Warning Temperature Time: 0 minutes 00:07:24.314 Critical Temperature Time: 0 minutes 00:07:24.314 00:07:24.314 Number of Queues 00:07:24.314 ================ 00:07:24.314 Number of I/O Submission Queues: 64 00:07:24.314 Number of I/O Completion Queues: 64 00:07:24.314 00:07:24.314 ZNS Specific Controller Data 00:07:24.314 ============================ 00:07:24.314 Zone Append Size Limit: 0 00:07:24.314 00:07:24.314 00:07:24.314 Active Namespaces 00:07:24.314 ================= 00:07:24.314 Namespace ID:1 00:07:24.314 Error Recovery Timeout: Unlimited 00:07:24.314 Command Set Identifier: NVM (00h) 00:07:24.314 Deallocate: Supported 
00:07:24.314 Deallocated/Unwritten Error: Supported 00:07:24.314 Deallocated Read Value: All 0x00 00:07:24.314 Deallocate in Write Zeroes: Not Supported 00:07:24.314 Deallocated Guard Field: 0xFFFF 00:07:24.314 Flush: Supported 00:07:24.314 Reservation: Not Supported 00:07:24.314 Namespace Sharing Capabilities: Private 00:07:24.314 Size (in LBAs): 1310720 (5GiB) 00:07:24.314 Capacity (in LBAs): 1310720 (5GiB) 00:07:24.314 Utilization (in LBAs): 1310720 (5GiB) 00:07:24.314 Thin Provisioning: Not Supported 00:07:24.314 Per-NS Atomic Units: No 00:07:24.314 Maximum Single Source Range Length: 128 00:07:24.314 Maximum Copy Length: 128 00:07:24.314 Maximum Source Range Count: 128 00:07:24.314 NGUID/EUI64 Never Reused: No 00:07:24.314 Namespace Write Protected: No 00:07:24.314 Number of LBA Formats: 8 00:07:24.314 Current LBA Format: LBA Format #04 00:07:24.314 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:24.314 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:24.314 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:24.314 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:24.314 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:24.314 LBA Forma[2024-11-28 09:40:03.096185] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62791 terminated unexpected 00:07:24.314 t #05: Data Size: 4096 Metadata Size: 8 00:07:24.314 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:24.314 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:24.314 00:07:24.314 NVM Specific Namespace Data 00:07:24.314 =========================== 00:07:24.314 Logical Block Storage Tag Mask: 0 00:07:24.314 Protection Information Capabilities: 00:07:24.314 16b Guard Protection Information Storage Tag Support: No 00:07:24.314 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:24.314 Storage Tag Check Read Support: No 00:07:24.314 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.314 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.314 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.314 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.314 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.314 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.314 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.314 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.314 ===================================================== 00:07:24.314 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:24.314 ===================================================== 00:07:24.314 Controller Capabilities/Features 00:07:24.314 ================================ 00:07:24.314 Vendor ID: 1b36 00:07:24.314 Subsystem Vendor ID: 1af4 00:07:24.314 Serial Number: 12342 00:07:24.314 Model Number: QEMU NVMe Ctrl 00:07:24.314 Firmware Version: 8.0.0 00:07:24.314 Recommended Arb Burst: 6 00:07:24.314 IEEE OUI Identifier: 00 54 52 00:07:24.314 Multi-path I/O 00:07:24.315 May have multiple subsystem ports: No 00:07:24.315 May have multiple controllers: No 00:07:24.315 Associated with SR-IOV VF: No 00:07:24.315 Max Data Transfer Size: 524288 00:07:24.315 Max Number of Namespaces: 256 00:07:24.315 
Max Number of I/O Queues: 64 00:07:24.315 NVMe Specification Version (VS): 1.4 00:07:24.315 NVMe Specification Version (Identify): 1.4 00:07:24.315 Maximum Queue Entries: 2048 00:07:24.315 Contiguous Queues Required: Yes 00:07:24.315 Arbitration Mechanisms Supported 00:07:24.315 Weighted Round Robin: Not Supported 00:07:24.315 Vendor Specific: Not Supported 00:07:24.315 Reset Timeout: 7500 ms 00:07:24.315 Doorbell Stride: 4 bytes 00:07:24.315 NVM Subsystem Reset: Not Supported 00:07:24.315 Command Sets Supported 00:07:24.315 NVM Command Set: Supported 00:07:24.315 Boot Partition: Not Supported 00:07:24.315 Memory Page Size Minimum: 4096 bytes 00:07:24.315 Memory Page Size Maximum: 65536 bytes 00:07:24.315 Persistent Memory Region: Not Supported 00:07:24.315 Optional Asynchronous Events Supported 00:07:24.315 Namespace Attribute Notices: Supported 00:07:24.315 Firmware Activation Notices: Not Supported 00:07:24.315 ANA Change Notices: Not Supported 00:07:24.315 PLE Aggregate Log Change Notices: Not Supported 00:07:24.315 LBA Status Info Alert Notices: Not Supported 00:07:24.315 EGE Aggregate Log Change Notices: Not Supported 00:07:24.315 Normal NVM Subsystem Shutdown event: Not Supported 00:07:24.315 Zone Descriptor Change Notices: Not Supported 00:07:24.315 Discovery Log Change Notices: Not Supported 00:07:24.315 Controller Attributes 00:07:24.315 128-bit Host Identifier: Not Supported 00:07:24.315 Non-Operational Permissive Mode: Not Supported 00:07:24.315 NVM Sets: Not Supported 00:07:24.315 Read Recovery Levels: Not Supported 00:07:24.315 Endurance Groups: Not Supported 00:07:24.315 Predictable Latency Mode: Not Supported 00:07:24.315 Traffic Based Keep ALive: Not Supported 00:07:24.315 Namespace Granularity: Not Supported 00:07:24.315 SQ Associations: Not Supported 00:07:24.315 UUID List: Not Supported 00:07:24.315 Multi-Domain Subsystem: Not Supported 00:07:24.315 Fixed Capacity Management: Not Supported 00:07:24.315 Variable Capacity Management: Not Supported 00:07:24.315 Delete Endurance Group: Not Supported 00:07:24.315 Delete NVM Set: Not Supported 00:07:24.315 Extended LBA Formats Supported: Supported 00:07:24.315 Flexible Data Placement Supported: Not Supported 00:07:24.315 00:07:24.315 Controller Memory Buffer Support 00:07:24.315 ================================ 00:07:24.315 Supported: No 00:07:24.315 00:07:24.315 Persistent Memory Region Support 00:07:24.315 ================================ 00:07:24.315 Supported: No 00:07:24.315 00:07:24.315 Admin Command Set Attributes 00:07:24.315 ============================ 00:07:24.315 Security Send/Receive: Not Supported 00:07:24.315 Format NVM: Supported 00:07:24.315 Firmware Activate/Download: Not Supported 00:07:24.315 Namespace Management: Supported 00:07:24.315 Device Self-Test: Not Supported 00:07:24.315 Directives: Supported 00:07:24.315 NVMe-MI: Not Supported 00:07:24.315 Virtualization Management: Not Supported 00:07:24.315 Doorbell Buffer Config: Supported 00:07:24.315 Get LBA Status Capability: Not Supported 00:07:24.315 Command & Feature Lockdown Capability: Not Supported 00:07:24.315 Abort Command Limit: 4 00:07:24.315 Async Event Request Limit: 4 00:07:24.315 Number of Firmware Slots: N/A 00:07:24.315 Firmware Slot 1 Read-Only: N/A 00:07:24.315 Firmware Activation Without Reset: N/A 00:07:24.315 Multiple Update Detection Support: N/A 00:07:24.315 Firmware Update Granularity: No Information Provided 00:07:24.315 Per-Namespace SMART Log: Yes 00:07:24.315 Asymmetric Namespace Access Log Page: Not Supported 00:07:24.315 
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:24.315 Command Effects Log Page: Supported 00:07:24.315 Get Log Page Extended Data: Supported 00:07:24.315 Telemetry Log Pages: Not Supported 00:07:24.315 Persistent Event Log Pages: Not Supported 00:07:24.315 Supported Log Pages Log Page: May Support 00:07:24.315 Commands Supported & Effects Log Page: Not Supported 00:07:24.315 Feature Identifiers & Effects Log Page:May Support 00:07:24.315 NVMe-MI Commands & Effects Log Page: May Support 00:07:24.315 Data Area 4 for Telemetry Log: Not Supported 00:07:24.315 Error Log Page Entries Supported: 1 00:07:24.315 Keep Alive: Not Supported 00:07:24.315 00:07:24.315 NVM Command Set Attributes 00:07:24.315 ========================== 00:07:24.315 Submission Queue Entry Size 00:07:24.315 Max: 64 00:07:24.315 Min: 64 00:07:24.315 Completion Queue Entry Size 00:07:24.315 Max: 16 00:07:24.315 Min: 16 00:07:24.315 Number of Namespaces: 256 00:07:24.315 Compare Command: Supported 00:07:24.315 Write Uncorrectable Command: Not Supported 00:07:24.315 Dataset Management Command: Supported 00:07:24.315 Write Zeroes Command: Supported 00:07:24.315 Set Features Save Field: Supported 00:07:24.315 Reservations: Not Supported 00:07:24.315 Timestamp: Supported 00:07:24.315 Copy: Supported 00:07:24.315 Volatile Write Cache: Present 00:07:24.315 Atomic Write Unit (Normal): 1 00:07:24.315 Atomic Write Unit (PFail): 1 00:07:24.315 Atomic Compare & Write Unit: 1 00:07:24.315 Fused Compare & Write: Not Supported 00:07:24.315 Scatter-Gather List 00:07:24.315 SGL Command Set: Supported 00:07:24.315 SGL Keyed: Not Supported 00:07:24.315 SGL Bit Bucket Descriptor: Not Supported 00:07:24.315 SGL Metadata Pointer: Not Supported 00:07:24.315 Oversized SGL: Not Supported 00:07:24.315 SGL Metadata Address: Not Supported 00:07:24.315 SGL Offset: Not Supported 00:07:24.315 Transport SGL Data Block: Not Supported 00:07:24.315 Replay Protected Memory Block: Not Supported 00:07:24.315 00:07:24.315 Firmware Slot Information 00:07:24.315 ========================= 00:07:24.315 Active slot: 1 00:07:24.315 Slot 1 Firmware Revision: 1.0 00:07:24.315 00:07:24.315 00:07:24.315 Commands Supported and Effects 00:07:24.315 ============================== 00:07:24.315 Admin Commands 00:07:24.315 -------------- 00:07:24.315 Delete I/O Submission Queue (00h): Supported 00:07:24.315 Create I/O Submission Queue (01h): Supported 00:07:24.315 Get Log Page (02h): Supported 00:07:24.315 Delete I/O Completion Queue (04h): Supported 00:07:24.315 Create I/O Completion Queue (05h): Supported 00:07:24.315 Identify (06h): Supported 00:07:24.315 Abort (08h): Supported 00:07:24.315 Set Features (09h): Supported 00:07:24.315 Get Features (0Ah): Supported 00:07:24.315 Asynchronous Event Request (0Ch): Supported 00:07:24.315 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:24.315 Directive Send (19h): Supported 00:07:24.315 Directive Receive (1Ah): Supported 00:07:24.315 Virtualization Management (1Ch): Supported 00:07:24.315 Doorbell Buffer Config (7Ch): Supported 00:07:24.315 Format NVM (80h): Supported LBA-Change 00:07:24.315 I/O Commands 00:07:24.315 ------------ 00:07:24.315 Flush (00h): Supported LBA-Change 00:07:24.315 Write (01h): Supported LBA-Change 00:07:24.315 Read (02h): Supported 00:07:24.315 Compare (05h): Supported 00:07:24.315 Write Zeroes (08h): Supported LBA-Change 00:07:24.315 Dataset Management (09h): Supported LBA-Change 00:07:24.315 Unknown (0Ch): Supported 00:07:24.315 Unknown (12h): Supported 00:07:24.315 Copy (19h): Supported 
LBA-Change 00:07:24.316 Unknown (1Dh): Supported LBA-Change 00:07:24.316 00:07:24.316 Error Log 00:07:24.316 ========= 00:07:24.316 00:07:24.316 Arbitration 00:07:24.316 =========== 00:07:24.316 Arbitration Burst: no limit 00:07:24.316 00:07:24.316 Power Management 00:07:24.316 ================ 00:07:24.316 Number of Power States: 1 00:07:24.316 Current Power State: Power State #0 00:07:24.316 Power State #0: 00:07:24.316 Max Power: 25.00 W 00:07:24.316 Non-Operational State: Operational 00:07:24.316 Entry Latency: 16 microseconds 00:07:24.316 Exit Latency: 4 microseconds 00:07:24.316 Relative Read Throughput: 0 00:07:24.316 Relative Read Latency: 0 00:07:24.316 Relative Write Throughput: 0 00:07:24.316 Relative Write Latency: 0 00:07:24.316 Idle Power: Not Reported 00:07:24.316 Active Power: Not Reported 00:07:24.316 Non-Operational Permissive Mode: Not Supported 00:07:24.316 00:07:24.316 Health Information 00:07:24.316 ================== 00:07:24.316 Critical Warnings: 00:07:24.316 Available Spare Space: OK 00:07:24.316 Temperature: OK 00:07:24.316 Device Reliability: OK 00:07:24.316 Read Only: No 00:07:24.316 Volatile Memory Backup: OK 00:07:24.316 Current Temperature: 323 Kelvin (50 Celsius) 00:07:24.316 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:24.316 Available Spare: 0% 00:07:24.316 Available Spare Threshold: 0% 00:07:24.316 Life Percentage Used: 0% 00:07:24.316 Data Units Read: 2096 00:07:24.316 Data Units Written: 1883 00:07:24.316 Host Read Commands: 105354 00:07:24.316 Host Write Commands: 103623 00:07:24.316 Controller Busy Time: 0 minutes 00:07:24.316 Power Cycles: 0 00:07:24.316 Power On Hours: 0 hours 00:07:24.316 Unsafe Shutdowns: 0 00:07:24.316 Unrecoverable Media Errors: 0 00:07:24.316 Lifetime Error Log Entries: 0 00:07:24.316 Warning Temperature Time: 0 minutes 00:07:24.316 Critical Temperature Time: 0 minutes 00:07:24.316 00:07:24.316 Number of Queues 00:07:24.316 ================ 00:07:24.316 Number of I/O Submission Queues: 64 00:07:24.316 Number of I/O Completion Queues: 64 00:07:24.316 00:07:24.316 ZNS Specific Controller Data 00:07:24.316 ============================ 00:07:24.316 Zone Append Size Limit: 0 00:07:24.316 00:07:24.316 00:07:24.316 Active Namespaces 00:07:24.316 ================= 00:07:24.316 Namespace ID:1 00:07:24.316 Error Recovery Timeout: Unlimited 00:07:24.316 Command Set Identifier: NVM (00h) 00:07:24.316 Deallocate: Supported 00:07:24.316 Deallocated/Unwritten Error: Supported 00:07:24.316 Deallocated Read Value: All 0x00 00:07:24.316 Deallocate in Write Zeroes: Not Supported 00:07:24.316 Deallocated Guard Field: 0xFFFF 00:07:24.316 Flush: Supported 00:07:24.316 Reservation: Not Supported 00:07:24.316 Namespace Sharing Capabilities: Private 00:07:24.316 Size (in LBAs): 1048576 (4GiB) 00:07:24.316 Capacity (in LBAs): 1048576 (4GiB) 00:07:24.316 Utilization (in LBAs): 1048576 (4GiB) 00:07:24.316 Thin Provisioning: Not Supported 00:07:24.316 Per-NS Atomic Units: No 00:07:24.316 Maximum Single Source Range Length: 128 00:07:24.316 Maximum Copy Length: 128 00:07:24.316 Maximum Source Range Count: 128 00:07:24.316 NGUID/EUI64 Never Reused: No 00:07:24.316 Namespace Write Protected: No 00:07:24.316 Number of LBA Formats: 8 00:07:24.316 Current LBA Format: LBA Format #04 00:07:24.316 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:24.316 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:24.316 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:24.316 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:24.316 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:07:24.316 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:24.316 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:24.316 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:24.316 00:07:24.316 NVM Specific Namespace Data 00:07:24.316 =========================== 00:07:24.316 Logical Block Storage Tag Mask: 0 00:07:24.316 Protection Information Capabilities: 00:07:24.316 16b Guard Protection Information Storage Tag Support: No 00:07:24.316 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:24.316 Storage Tag Check Read Support: No 00:07:24.316 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.316 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.316 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.316 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.316 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.316 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.316 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.316 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.316 Namespace ID:2 00:07:24.316 Error Recovery Timeout: Unlimited 00:07:24.316 Command Set Identifier: NVM (00h) 00:07:24.316 Deallocate: Supported 00:07:24.316 Deallocated/Unwritten Error: Supported 00:07:24.316 Deallocated Read Value: All 0x00 00:07:24.316 Deallocate in Write Zeroes: Not Supported 00:07:24.316 Deallocated Guard Field: 0xFFFF 00:07:24.316 Flush: Supported 00:07:24.316 Reservation: Not Supported 00:07:24.316 Namespace Sharing Capabilities: Private 00:07:24.316 Size (in LBAs): 1048576 (4GiB) 00:07:24.316 Capacity (in LBAs): 1048576 (4GiB) 00:07:24.316 Utilization (in LBAs): 1048576 (4GiB) 00:07:24.316 Thin Provisioning: Not Supported 00:07:24.316 Per-NS Atomic Units: No 00:07:24.316 Maximum Single Source Range Length: 128 00:07:24.316 Maximum Copy Length: 128 00:07:24.316 Maximum Source Range Count: 128 00:07:24.316 NGUID/EUI64 Never Reused: No 00:07:24.316 Namespace Write Protected: No 00:07:24.316 Number of LBA Formats: 8 00:07:24.316 Current LBA Format: LBA Format #04 00:07:24.316 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:24.316 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:24.316 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:24.316 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:24.316 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:24.316 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:24.316 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:24.316 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:24.316 00:07:24.316 NVM Specific Namespace Data 00:07:24.316 =========================== 00:07:24.316 Logical Block Storage Tag Mask: 0 00:07:24.316 Protection Information Capabilities: 00:07:24.316 16b Guard Protection Information Storage Tag Support: No 00:07:24.316 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:24.316 Storage Tag Check Read Support: No 00:07:24.316 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.316 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:07:24.316 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.316 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.316 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.316 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.316 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.316 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.316 Namespace ID:3 00:07:24.316 Error Recovery Timeout: Unlimited 00:07:24.316 Command Set Identifier: NVM (00h) 00:07:24.316 Deallocate: Supported 00:07:24.316 Deallocated/Unwritten Error: Supported 00:07:24.316 Deallocated Read Value: All 0x00 00:07:24.316 Deallocate in Write Zeroes: Not Supported 00:07:24.316 Deallocated Guard Field: 0xFFFF 00:07:24.316 Flush: Supported 00:07:24.316 Reservation: Not Supported 00:07:24.316 Namespace Sharing Capabilities: Private 00:07:24.316 Size (in LBAs): 1048576 (4GiB) 00:07:24.316 Capacity (in LBAs): 1048576 (4GiB) 00:07:24.316 Utilization (in LBAs): 1048576 (4GiB) 00:07:24.316 Thin Provisioning: Not Supported 00:07:24.316 Per-NS Atomic Units: No 00:07:24.316 Maximum Single Source Range Length: 128 00:07:24.316 Maximum Copy Length: 128 00:07:24.316 Maximum Source Range Count: 128 00:07:24.316 NGUID/EUI64 Never Reused: No 00:07:24.316 Namespace Write Protected: No 00:07:24.316 Number of LBA Formats: 8 00:07:24.316 Current LBA Format: LBA Format #04 00:07:24.316 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:24.316 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:24.316 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:24.316 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:24.316 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:24.316 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:24.316 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:24.316 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:24.316 00:07:24.316 NVM Specific Namespace Data 00:07:24.316 =========================== 00:07:24.316 Logical Block Storage Tag Mask: 0 00:07:24.317 Protection Information Capabilities: 00:07:24.317 16b Guard Protection Information Storage Tag Support: No 00:07:24.317 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:24.317 Storage Tag Check Read Support: No 00:07:24.317 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.317 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.317 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.317 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.317 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.317 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.317 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.317 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.317 09:40:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:24.317 09:40:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:24.618 ===================================================== 00:07:24.618 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:24.618 ===================================================== 00:07:24.618 Controller Capabilities/Features 00:07:24.618 ================================ 00:07:24.618 Vendor ID: 1b36 00:07:24.618 Subsystem Vendor ID: 1af4 00:07:24.618 Serial Number: 12340 00:07:24.618 Model Number: QEMU NVMe Ctrl 00:07:24.618 Firmware Version: 8.0.0 00:07:24.618 Recommended Arb Burst: 6 00:07:24.618 IEEE OUI Identifier: 00 54 52 00:07:24.618 Multi-path I/O 00:07:24.618 May have multiple subsystem ports: No 00:07:24.618 May have multiple controllers: No 00:07:24.618 Associated with SR-IOV VF: No 00:07:24.618 Max Data Transfer Size: 524288 00:07:24.618 Max Number of Namespaces: 256 00:07:24.618 Max Number of I/O Queues: 64 00:07:24.618 NVMe Specification Version (VS): 1.4 00:07:24.618 NVMe Specification Version (Identify): 1.4 00:07:24.618 Maximum Queue Entries: 2048 00:07:24.618 Contiguous Queues Required: Yes 00:07:24.618 Arbitration Mechanisms Supported 00:07:24.618 Weighted Round Robin: Not Supported 00:07:24.618 Vendor Specific: Not Supported 00:07:24.618 Reset Timeout: 7500 ms 00:07:24.618 Doorbell Stride: 4 bytes 00:07:24.618 NVM Subsystem Reset: Not Supported 00:07:24.618 Command Sets Supported 00:07:24.618 NVM Command Set: Supported 00:07:24.618 Boot Partition: Not Supported 00:07:24.618 Memory Page Size Minimum: 4096 bytes 00:07:24.618 Memory Page Size Maximum: 65536 bytes 00:07:24.618 Persistent Memory Region: Not Supported 00:07:24.618 Optional Asynchronous Events Supported 00:07:24.618 Namespace Attribute Notices: Supported 00:07:24.618 Firmware Activation Notices: Not Supported 00:07:24.618 ANA Change Notices: Not Supported 00:07:24.618 PLE Aggregate Log Change Notices: Not Supported 00:07:24.618 LBA Status Info Alert Notices: Not Supported 00:07:24.618 EGE Aggregate Log Change Notices: Not Supported 00:07:24.618 Normal NVM Subsystem Shutdown event: Not Supported 00:07:24.618 Zone Descriptor Change Notices: Not Supported 00:07:24.618 Discovery Log Change Notices: Not Supported 00:07:24.618 Controller Attributes 00:07:24.618 128-bit Host Identifier: Not Supported 00:07:24.618 Non-Operational Permissive Mode: Not Supported 00:07:24.618 NVM Sets: Not Supported 00:07:24.618 Read Recovery Levels: Not Supported 00:07:24.618 Endurance Groups: Not Supported 00:07:24.618 Predictable Latency Mode: Not Supported 00:07:24.618 Traffic Based Keep ALive: Not Supported 00:07:24.618 Namespace Granularity: Not Supported 00:07:24.618 SQ Associations: Not Supported 00:07:24.618 UUID List: Not Supported 00:07:24.618 Multi-Domain Subsystem: Not Supported 00:07:24.618 Fixed Capacity Management: Not Supported 00:07:24.618 Variable Capacity Management: Not Supported 00:07:24.618 Delete Endurance Group: Not Supported 00:07:24.618 Delete NVM Set: Not Supported 00:07:24.618 Extended LBA Formats Supported: Supported 00:07:24.618 Flexible Data Placement Supported: Not Supported 00:07:24.618 00:07:24.618 Controller Memory Buffer Support 00:07:24.618 ================================ 00:07:24.618 Supported: No 00:07:24.618 00:07:24.618 Persistent Memory Region Support 00:07:24.618 ================================ 00:07:24.618 Supported: No 00:07:24.618 00:07:24.618 Admin Command Set Attributes 00:07:24.618 ============================ 00:07:24.618 Security Send/Receive: Not Supported 00:07:24.618 
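The nvme.sh trace above runs spdk_nvme_identify once per PCIe controller by looping over a bdfs array. A standalone sketch along the same lines, hard-coding the four addresses probed in this run (a real harness would discover them rather than hard-code them):

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
  # -r selects the transport and address, -i 0 matches the invocations in this log
  "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
done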
Format NVM: Supported 00:07:24.618 Firmware Activate/Download: Not Supported 00:07:24.618 Namespace Management: Supported 00:07:24.618 Device Self-Test: Not Supported 00:07:24.618 Directives: Supported 00:07:24.618 NVMe-MI: Not Supported 00:07:24.618 Virtualization Management: Not Supported 00:07:24.618 Doorbell Buffer Config: Supported 00:07:24.618 Get LBA Status Capability: Not Supported 00:07:24.618 Command & Feature Lockdown Capability: Not Supported 00:07:24.618 Abort Command Limit: 4 00:07:24.618 Async Event Request Limit: 4 00:07:24.618 Number of Firmware Slots: N/A 00:07:24.618 Firmware Slot 1 Read-Only: N/A 00:07:24.618 Firmware Activation Without Reset: N/A 00:07:24.618 Multiple Update Detection Support: N/A 00:07:24.618 Firmware Update Granularity: No Information Provided 00:07:24.618 Per-Namespace SMART Log: Yes 00:07:24.618 Asymmetric Namespace Access Log Page: Not Supported 00:07:24.618 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:24.618 Command Effects Log Page: Supported 00:07:24.618 Get Log Page Extended Data: Supported 00:07:24.618 Telemetry Log Pages: Not Supported 00:07:24.618 Persistent Event Log Pages: Not Supported 00:07:24.618 Supported Log Pages Log Page: May Support 00:07:24.618 Commands Supported & Effects Log Page: Not Supported 00:07:24.618 Feature Identifiers & Effects Log Page:May Support 00:07:24.618 NVMe-MI Commands & Effects Log Page: May Support 00:07:24.618 Data Area 4 for Telemetry Log: Not Supported 00:07:24.618 Error Log Page Entries Supported: 1 00:07:24.618 Keep Alive: Not Supported 00:07:24.618 00:07:24.618 NVM Command Set Attributes 00:07:24.618 ========================== 00:07:24.618 Submission Queue Entry Size 00:07:24.619 Max: 64 00:07:24.619 Min: 64 00:07:24.619 Completion Queue Entry Size 00:07:24.619 Max: 16 00:07:24.619 Min: 16 00:07:24.619 Number of Namespaces: 256 00:07:24.619 Compare Command: Supported 00:07:24.619 Write Uncorrectable Command: Not Supported 00:07:24.619 Dataset Management Command: Supported 00:07:24.619 Write Zeroes Command: Supported 00:07:24.619 Set Features Save Field: Supported 00:07:24.619 Reservations: Not Supported 00:07:24.619 Timestamp: Supported 00:07:24.619 Copy: Supported 00:07:24.619 Volatile Write Cache: Present 00:07:24.619 Atomic Write Unit (Normal): 1 00:07:24.619 Atomic Write Unit (PFail): 1 00:07:24.619 Atomic Compare & Write Unit: 1 00:07:24.619 Fused Compare & Write: Not Supported 00:07:24.619 Scatter-Gather List 00:07:24.619 SGL Command Set: Supported 00:07:24.619 SGL Keyed: Not Supported 00:07:24.619 SGL Bit Bucket Descriptor: Not Supported 00:07:24.619 SGL Metadata Pointer: Not Supported 00:07:24.619 Oversized SGL: Not Supported 00:07:24.619 SGL Metadata Address: Not Supported 00:07:24.619 SGL Offset: Not Supported 00:07:24.619 Transport SGL Data Block: Not Supported 00:07:24.619 Replay Protected Memory Block: Not Supported 00:07:24.619 00:07:24.619 Firmware Slot Information 00:07:24.619 ========================= 00:07:24.619 Active slot: 1 00:07:24.619 Slot 1 Firmware Revision: 1.0 00:07:24.619 00:07:24.619 00:07:24.619 Commands Supported and Effects 00:07:24.619 ============================== 00:07:24.619 Admin Commands 00:07:24.619 -------------- 00:07:24.619 Delete I/O Submission Queue (00h): Supported 00:07:24.619 Create I/O Submission Queue (01h): Supported 00:07:24.619 Get Log Page (02h): Supported 00:07:24.619 Delete I/O Completion Queue (04h): Supported 00:07:24.619 Create I/O Completion Queue (05h): Supported 00:07:24.619 Identify (06h): Supported 00:07:24.619 Abort (08h): Supported 
00:07:24.619 Set Features (09h): Supported 00:07:24.619 Get Features (0Ah): Supported 00:07:24.619 Asynchronous Event Request (0Ch): Supported 00:07:24.619 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:24.619 Directive Send (19h): Supported 00:07:24.619 Directive Receive (1Ah): Supported 00:07:24.619 Virtualization Management (1Ch): Supported 00:07:24.619 Doorbell Buffer Config (7Ch): Supported 00:07:24.619 Format NVM (80h): Supported LBA-Change 00:07:24.619 I/O Commands 00:07:24.619 ------------ 00:07:24.619 Flush (00h): Supported LBA-Change 00:07:24.619 Write (01h): Supported LBA-Change 00:07:24.619 Read (02h): Supported 00:07:24.619 Compare (05h): Supported 00:07:24.619 Write Zeroes (08h): Supported LBA-Change 00:07:24.619 Dataset Management (09h): Supported LBA-Change 00:07:24.619 Unknown (0Ch): Supported 00:07:24.619 Unknown (12h): Supported 00:07:24.619 Copy (19h): Supported LBA-Change 00:07:24.619 Unknown (1Dh): Supported LBA-Change 00:07:24.619 00:07:24.619 Error Log 00:07:24.619 ========= 00:07:24.619 00:07:24.619 Arbitration 00:07:24.619 =========== 00:07:24.619 Arbitration Burst: no limit 00:07:24.619 00:07:24.619 Power Management 00:07:24.619 ================ 00:07:24.619 Number of Power States: 1 00:07:24.619 Current Power State: Power State #0 00:07:24.619 Power State #0: 00:07:24.619 Max Power: 25.00 W 00:07:24.619 Non-Operational State: Operational 00:07:24.619 Entry Latency: 16 microseconds 00:07:24.619 Exit Latency: 4 microseconds 00:07:24.619 Relative Read Throughput: 0 00:07:24.619 Relative Read Latency: 0 00:07:24.619 Relative Write Throughput: 0 00:07:24.619 Relative Write Latency: 0 00:07:24.619 Idle Power: Not Reported 00:07:24.619 Active Power: Not Reported 00:07:24.619 Non-Operational Permissive Mode: Not Supported 00:07:24.619 00:07:24.619 Health Information 00:07:24.619 ================== 00:07:24.619 Critical Warnings: 00:07:24.619 Available Spare Space: OK 00:07:24.619 Temperature: OK 00:07:24.619 Device Reliability: OK 00:07:24.619 Read Only: No 00:07:24.619 Volatile Memory Backup: OK 00:07:24.619 Current Temperature: 323 Kelvin (50 Celsius) 00:07:24.619 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:24.619 Available Spare: 0% 00:07:24.619 Available Spare Threshold: 0% 00:07:24.619 Life Percentage Used: 0% 00:07:24.619 Data Units Read: 654 00:07:24.619 Data Units Written: 582 00:07:24.619 Host Read Commands: 34650 00:07:24.619 Host Write Commands: 34436 00:07:24.619 Controller Busy Time: 0 minutes 00:07:24.619 Power Cycles: 0 00:07:24.619 Power On Hours: 0 hours 00:07:24.619 Unsafe Shutdowns: 0 00:07:24.619 Unrecoverable Media Errors: 0 00:07:24.619 Lifetime Error Log Entries: 0 00:07:24.619 Warning Temperature Time: 0 minutes 00:07:24.619 Critical Temperature Time: 0 minutes 00:07:24.619 00:07:24.619 Number of Queues 00:07:24.619 ================ 00:07:24.619 Number of I/O Submission Queues: 64 00:07:24.619 Number of I/O Completion Queues: 64 00:07:24.619 00:07:24.619 ZNS Specific Controller Data 00:07:24.619 ============================ 00:07:24.619 Zone Append Size Limit: 0 00:07:24.619 00:07:24.619 00:07:24.619 Active Namespaces 00:07:24.619 ================= 00:07:24.619 Namespace ID:1 00:07:24.619 Error Recovery Timeout: Unlimited 00:07:24.619 Command Set Identifier: NVM (00h) 00:07:24.619 Deallocate: Supported 00:07:24.619 Deallocated/Unwritten Error: Supported 00:07:24.619 Deallocated Read Value: All 0x00 00:07:24.619 Deallocate in Write Zeroes: Not Supported 00:07:24.619 Deallocated Guard Field: 0xFFFF 00:07:24.619 Flush: 
Supported 00:07:24.619 Reservation: Not Supported 00:07:24.619 Metadata Transferred as: Separate Metadata Buffer 00:07:24.619 Namespace Sharing Capabilities: Private 00:07:24.619 Size (in LBAs): 1548666 (5GiB) 00:07:24.619 Capacity (in LBAs): 1548666 (5GiB) 00:07:24.619 Utilization (in LBAs): 1548666 (5GiB) 00:07:24.619 Thin Provisioning: Not Supported 00:07:24.619 Per-NS Atomic Units: No 00:07:24.619 Maximum Single Source Range Length: 128 00:07:24.619 Maximum Copy Length: 128 00:07:24.619 Maximum Source Range Count: 128 00:07:24.619 NGUID/EUI64 Never Reused: No 00:07:24.619 Namespace Write Protected: No 00:07:24.619 Number of LBA Formats: 8 00:07:24.619 Current LBA Format: LBA Format #07 00:07:24.619 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:24.619 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:24.619 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:24.619 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:24.619 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:24.619 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:24.619 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:24.619 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:24.619 00:07:24.619 NVM Specific Namespace Data 00:07:24.619 =========================== 00:07:24.619 Logical Block Storage Tag Mask: 0 00:07:24.619 Protection Information Capabilities: 00:07:24.619 16b Guard Protection Information Storage Tag Support: No 00:07:24.619 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:24.619 Storage Tag Check Read Support: No 00:07:24.619 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.619 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.619 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.619 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.619 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.619 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.619 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.619 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.619 09:40:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:24.619 09:40:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:24.882 ===================================================== 00:07:24.882 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:24.882 ===================================================== 00:07:24.882 Controller Capabilities/Features 00:07:24.882 ================================ 00:07:24.882 Vendor ID: 1b36 00:07:24.882 Subsystem Vendor ID: 1af4 00:07:24.882 Serial Number: 12341 00:07:24.882 Model Number: QEMU NVMe Ctrl 00:07:24.882 Firmware Version: 8.0.0 00:07:24.882 Recommended Arb Burst: 6 00:07:24.882 IEEE OUI Identifier: 00 54 52 00:07:24.882 Multi-path I/O 00:07:24.882 May have multiple subsystem ports: No 00:07:24.882 May have multiple controllers: No 00:07:24.882 Associated with SR-IOV VF: No 00:07:24.882 Max Data Transfer Size: 524288 00:07:24.882 Max Number of Namespaces: 256 00:07:24.882 Max Number of I/O Queues: 64 00:07:24.882 NVMe 
Specification Version (VS): 1.4 00:07:24.882 NVMe Specification Version (Identify): 1.4 00:07:24.882 Maximum Queue Entries: 2048 00:07:24.882 Contiguous Queues Required: Yes 00:07:24.882 Arbitration Mechanisms Supported 00:07:24.882 Weighted Round Robin: Not Supported 00:07:24.882 Vendor Specific: Not Supported 00:07:24.882 Reset Timeout: 7500 ms 00:07:24.882 Doorbell Stride: 4 bytes 00:07:24.882 NVM Subsystem Reset: Not Supported 00:07:24.882 Command Sets Supported 00:07:24.882 NVM Command Set: Supported 00:07:24.882 Boot Partition: Not Supported 00:07:24.882 Memory Page Size Minimum: 4096 bytes 00:07:24.882 Memory Page Size Maximum: 65536 bytes 00:07:24.882 Persistent Memory Region: Not Supported 00:07:24.882 Optional Asynchronous Events Supported 00:07:24.882 Namespace Attribute Notices: Supported 00:07:24.882 Firmware Activation Notices: Not Supported 00:07:24.882 ANA Change Notices: Not Supported 00:07:24.882 PLE Aggregate Log Change Notices: Not Supported 00:07:24.882 LBA Status Info Alert Notices: Not Supported 00:07:24.882 EGE Aggregate Log Change Notices: Not Supported 00:07:24.882 Normal NVM Subsystem Shutdown event: Not Supported 00:07:24.882 Zone Descriptor Change Notices: Not Supported 00:07:24.882 Discovery Log Change Notices: Not Supported 00:07:24.882 Controller Attributes 00:07:24.882 128-bit Host Identifier: Not Supported 00:07:24.882 Non-Operational Permissive Mode: Not Supported 00:07:24.882 NVM Sets: Not Supported 00:07:24.882 Read Recovery Levels: Not Supported 00:07:24.882 Endurance Groups: Not Supported 00:07:24.882 Predictable Latency Mode: Not Supported 00:07:24.882 Traffic Based Keep ALive: Not Supported 00:07:24.882 Namespace Granularity: Not Supported 00:07:24.882 SQ Associations: Not Supported 00:07:24.882 UUID List: Not Supported 00:07:24.882 Multi-Domain Subsystem: Not Supported 00:07:24.882 Fixed Capacity Management: Not Supported 00:07:24.882 Variable Capacity Management: Not Supported 00:07:24.882 Delete Endurance Group: Not Supported 00:07:24.882 Delete NVM Set: Not Supported 00:07:24.882 Extended LBA Formats Supported: Supported 00:07:24.882 Flexible Data Placement Supported: Not Supported 00:07:24.882 00:07:24.882 Controller Memory Buffer Support 00:07:24.882 ================================ 00:07:24.882 Supported: No 00:07:24.882 00:07:24.882 Persistent Memory Region Support 00:07:24.882 ================================ 00:07:24.882 Supported: No 00:07:24.882 00:07:24.882 Admin Command Set Attributes 00:07:24.882 ============================ 00:07:24.882 Security Send/Receive: Not Supported 00:07:24.882 Format NVM: Supported 00:07:24.882 Firmware Activate/Download: Not Supported 00:07:24.882 Namespace Management: Supported 00:07:24.882 Device Self-Test: Not Supported 00:07:24.882 Directives: Supported 00:07:24.882 NVMe-MI: Not Supported 00:07:24.882 Virtualization Management: Not Supported 00:07:24.882 Doorbell Buffer Config: Supported 00:07:24.882 Get LBA Status Capability: Not Supported 00:07:24.882 Command & Feature Lockdown Capability: Not Supported 00:07:24.882 Abort Command Limit: 4 00:07:24.882 Async Event Request Limit: 4 00:07:24.882 Number of Firmware Slots: N/A 00:07:24.882 Firmware Slot 1 Read-Only: N/A 00:07:24.882 Firmware Activation Without Reset: N/A 00:07:24.882 Multiple Update Detection Support: N/A 00:07:24.882 Firmware Update Granularity: No Information Provided 00:07:24.882 Per-Namespace SMART Log: Yes 00:07:24.882 Asymmetric Namespace Access Log Page: Not Supported 00:07:24.882 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:24.882 Command Effects Log Page: Supported 00:07:24.882 Get Log Page Extended Data: Supported 00:07:24.882 Telemetry Log Pages: Not Supported 00:07:24.882 Persistent Event Log Pages: Not Supported 00:07:24.882 Supported Log Pages Log Page: May Support 00:07:24.882 Commands Supported & Effects Log Page: Not Supported 00:07:24.882 Feature Identifiers & Effects Log Page:May Support 00:07:24.882 NVMe-MI Commands & Effects Log Page: May Support 00:07:24.882 Data Area 4 for Telemetry Log: Not Supported 00:07:24.882 Error Log Page Entries Supported: 1 00:07:24.882 Keep Alive: Not Supported 00:07:24.882 00:07:24.882 NVM Command Set Attributes 00:07:24.882 ========================== 00:07:24.882 Submission Queue Entry Size 00:07:24.882 Max: 64 00:07:24.882 Min: 64 00:07:24.882 Completion Queue Entry Size 00:07:24.882 Max: 16 00:07:24.882 Min: 16 00:07:24.882 Number of Namespaces: 256 00:07:24.882 Compare Command: Supported 00:07:24.882 Write Uncorrectable Command: Not Supported 00:07:24.882 Dataset Management Command: Supported 00:07:24.882 Write Zeroes Command: Supported 00:07:24.882 Set Features Save Field: Supported 00:07:24.882 Reservations: Not Supported 00:07:24.882 Timestamp: Supported 00:07:24.882 Copy: Supported 00:07:24.882 Volatile Write Cache: Present 00:07:24.882 Atomic Write Unit (Normal): 1 00:07:24.882 Atomic Write Unit (PFail): 1 00:07:24.882 Atomic Compare & Write Unit: 1 00:07:24.882 Fused Compare & Write: Not Supported 00:07:24.882 Scatter-Gather List 00:07:24.882 SGL Command Set: Supported 00:07:24.882 SGL Keyed: Not Supported 00:07:24.882 SGL Bit Bucket Descriptor: Not Supported 00:07:24.882 SGL Metadata Pointer: Not Supported 00:07:24.882 Oversized SGL: Not Supported 00:07:24.882 SGL Metadata Address: Not Supported 00:07:24.882 SGL Offset: Not Supported 00:07:24.882 Transport SGL Data Block: Not Supported 00:07:24.882 Replay Protected Memory Block: Not Supported 00:07:24.882 00:07:24.882 Firmware Slot Information 00:07:24.882 ========================= 00:07:24.882 Active slot: 1 00:07:24.882 Slot 1 Firmware Revision: 1.0 00:07:24.882 00:07:24.882 00:07:24.882 Commands Supported and Effects 00:07:24.882 ============================== 00:07:24.882 Admin Commands 00:07:24.882 -------------- 00:07:24.882 Delete I/O Submission Queue (00h): Supported 00:07:24.882 Create I/O Submission Queue (01h): Supported 00:07:24.882 Get Log Page (02h): Supported 00:07:24.882 Delete I/O Completion Queue (04h): Supported 00:07:24.882 Create I/O Completion Queue (05h): Supported 00:07:24.882 Identify (06h): Supported 00:07:24.882 Abort (08h): Supported 00:07:24.882 Set Features (09h): Supported 00:07:24.882 Get Features (0Ah): Supported 00:07:24.882 Asynchronous Event Request (0Ch): Supported 00:07:24.882 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:24.882 Directive Send (19h): Supported 00:07:24.883 Directive Receive (1Ah): Supported 00:07:24.883 Virtualization Management (1Ch): Supported 00:07:24.883 Doorbell Buffer Config (7Ch): Supported 00:07:24.883 Format NVM (80h): Supported LBA-Change 00:07:24.883 I/O Commands 00:07:24.883 ------------ 00:07:24.883 Flush (00h): Supported LBA-Change 00:07:24.883 Write (01h): Supported LBA-Change 00:07:24.883 Read (02h): Supported 00:07:24.883 Compare (05h): Supported 00:07:24.883 Write Zeroes (08h): Supported LBA-Change 00:07:24.883 Dataset Management (09h): Supported LBA-Change 00:07:24.883 Unknown (0Ch): Supported 00:07:24.883 Unknown (12h): Supported 00:07:24.883 Copy (19h): Supported LBA-Change 00:07:24.883 Unknown (1Dh): 
Supported LBA-Change 00:07:24.883 00:07:24.883 Error Log 00:07:24.883 ========= 00:07:24.883 00:07:24.883 Arbitration 00:07:24.883 =========== 00:07:24.883 Arbitration Burst: no limit 00:07:24.883 00:07:24.883 Power Management 00:07:24.883 ================ 00:07:24.883 Number of Power States: 1 00:07:24.883 Current Power State: Power State #0 00:07:24.883 Power State #0: 00:07:24.883 Max Power: 25.00 W 00:07:24.883 Non-Operational State: Operational 00:07:24.883 Entry Latency: 16 microseconds 00:07:24.883 Exit Latency: 4 microseconds 00:07:24.883 Relative Read Throughput: 0 00:07:24.883 Relative Read Latency: 0 00:07:24.883 Relative Write Throughput: 0 00:07:24.883 Relative Write Latency: 0 00:07:24.883 Idle Power: Not Reported 00:07:24.883 Active Power: Not Reported 00:07:24.883 Non-Operational Permissive Mode: Not Supported 00:07:24.883 00:07:24.883 Health Information 00:07:24.883 ================== 00:07:24.883 Critical Warnings: 00:07:24.883 Available Spare Space: OK 00:07:24.883 Temperature: OK 00:07:24.883 Device Reliability: OK 00:07:24.883 Read Only: No 00:07:24.883 Volatile Memory Backup: OK 00:07:24.883 Current Temperature: 323 Kelvin (50 Celsius) 00:07:24.883 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:24.883 Available Spare: 0% 00:07:24.883 Available Spare Threshold: 0% 00:07:24.883 Life Percentage Used: 0% 00:07:24.883 Data Units Read: 1007 00:07:24.883 Data Units Written: 868 00:07:24.883 Host Read Commands: 50979 00:07:24.883 Host Write Commands: 49689 00:07:24.883 Controller Busy Time: 0 minutes 00:07:24.883 Power Cycles: 0 00:07:24.883 Power On Hours: 0 hours 00:07:24.883 Unsafe Shutdowns: 0 00:07:24.883 Unrecoverable Media Errors: 0 00:07:24.883 Lifetime Error Log Entries: 0 00:07:24.883 Warning Temperature Time: 0 minutes 00:07:24.883 Critical Temperature Time: 0 minutes 00:07:24.883 00:07:24.883 Number of Queues 00:07:24.883 ================ 00:07:24.883 Number of I/O Submission Queues: 64 00:07:24.883 Number of I/O Completion Queues: 64 00:07:24.883 00:07:24.883 ZNS Specific Controller Data 00:07:24.883 ============================ 00:07:24.883 Zone Append Size Limit: 0 00:07:24.883 00:07:24.883 00:07:24.883 Active Namespaces 00:07:24.883 ================= 00:07:24.883 Namespace ID:1 00:07:24.883 Error Recovery Timeout: Unlimited 00:07:24.883 Command Set Identifier: NVM (00h) 00:07:24.883 Deallocate: Supported 00:07:24.883 Deallocated/Unwritten Error: Supported 00:07:24.883 Deallocated Read Value: All 0x00 00:07:24.883 Deallocate in Write Zeroes: Not Supported 00:07:24.883 Deallocated Guard Field: 0xFFFF 00:07:24.883 Flush: Supported 00:07:24.883 Reservation: Not Supported 00:07:24.883 Namespace Sharing Capabilities: Private 00:07:24.883 Size (in LBAs): 1310720 (5GiB) 00:07:24.883 Capacity (in LBAs): 1310720 (5GiB) 00:07:24.883 Utilization (in LBAs): 1310720 (5GiB) 00:07:24.883 Thin Provisioning: Not Supported 00:07:24.883 Per-NS Atomic Units: No 00:07:24.883 Maximum Single Source Range Length: 128 00:07:24.883 Maximum Copy Length: 128 00:07:24.883 Maximum Source Range Count: 128 00:07:24.883 NGUID/EUI64 Never Reused: No 00:07:24.883 Namespace Write Protected: No 00:07:24.883 Number of LBA Formats: 8 00:07:24.883 Current LBA Format: LBA Format #04 00:07:24.883 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:24.883 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:24.883 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:24.883 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:24.883 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:24.883 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:24.883 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:24.883 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:24.883 00:07:24.883 NVM Specific Namespace Data 00:07:24.883 =========================== 00:07:24.883 Logical Block Storage Tag Mask: 0 00:07:24.883 Protection Information Capabilities: 00:07:24.883 16b Guard Protection Information Storage Tag Support: No 00:07:24.883 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:24.883 Storage Tag Check Read Support: No 00:07:24.883 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.883 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.883 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.883 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.883 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.883 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.883 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.883 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.883 09:40:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:24.883 09:40:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:25.145 ===================================================== 00:07:25.145 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:25.145 ===================================================== 00:07:25.145 Controller Capabilities/Features 00:07:25.145 ================================ 00:07:25.145 Vendor ID: 1b36 00:07:25.145 Subsystem Vendor ID: 1af4 00:07:25.145 Serial Number: 12342 00:07:25.145 Model Number: QEMU NVMe Ctrl 00:07:25.145 Firmware Version: 8.0.0 00:07:25.145 Recommended Arb Burst: 6 00:07:25.145 IEEE OUI Identifier: 00 54 52 00:07:25.145 Multi-path I/O 00:07:25.145 May have multiple subsystem ports: No 00:07:25.145 May have multiple controllers: No 00:07:25.145 Associated with SR-IOV VF: No 00:07:25.145 Max Data Transfer Size: 524288 00:07:25.145 Max Number of Namespaces: 256 00:07:25.145 Max Number of I/O Queues: 64 00:07:25.145 NVMe Specification Version (VS): 1.4 00:07:25.145 NVMe Specification Version (Identify): 1.4 00:07:25.145 Maximum Queue Entries: 2048 00:07:25.145 Contiguous Queues Required: Yes 00:07:25.145 Arbitration Mechanisms Supported 00:07:25.145 Weighted Round Robin: Not Supported 00:07:25.145 Vendor Specific: Not Supported 00:07:25.145 Reset Timeout: 7500 ms 00:07:25.145 Doorbell Stride: 4 bytes 00:07:25.145 NVM Subsystem Reset: Not Supported 00:07:25.145 Command Sets Supported 00:07:25.145 NVM Command Set: Supported 00:07:25.145 Boot Partition: Not Supported 00:07:25.145 Memory Page Size Minimum: 4096 bytes 00:07:25.145 Memory Page Size Maximum: 65536 bytes 00:07:25.145 Persistent Memory Region: Not Supported 00:07:25.145 Optional Asynchronous Events Supported 00:07:25.145 Namespace Attribute Notices: Supported 00:07:25.145 Firmware Activation Notices: Not Supported 00:07:25.145 ANA Change Notices: Not Supported 00:07:25.145 PLE Aggregate Log Change Notices: Not Supported 00:07:25.145 LBA Status Info Alert Notices: 
Not Supported 00:07:25.145 EGE Aggregate Log Change Notices: Not Supported 00:07:25.145 Normal NVM Subsystem Shutdown event: Not Supported 00:07:25.145 Zone Descriptor Change Notices: Not Supported 00:07:25.145 Discovery Log Change Notices: Not Supported 00:07:25.145 Controller Attributes 00:07:25.146 128-bit Host Identifier: Not Supported 00:07:25.146 Non-Operational Permissive Mode: Not Supported 00:07:25.146 NVM Sets: Not Supported 00:07:25.146 Read Recovery Levels: Not Supported 00:07:25.146 Endurance Groups: Not Supported 00:07:25.146 Predictable Latency Mode: Not Supported 00:07:25.146 Traffic Based Keep ALive: Not Supported 00:07:25.146 Namespace Granularity: Not Supported 00:07:25.146 SQ Associations: Not Supported 00:07:25.146 UUID List: Not Supported 00:07:25.146 Multi-Domain Subsystem: Not Supported 00:07:25.146 Fixed Capacity Management: Not Supported 00:07:25.146 Variable Capacity Management: Not Supported 00:07:25.146 Delete Endurance Group: Not Supported 00:07:25.146 Delete NVM Set: Not Supported 00:07:25.146 Extended LBA Formats Supported: Supported 00:07:25.146 Flexible Data Placement Supported: Not Supported 00:07:25.146 00:07:25.146 Controller Memory Buffer Support 00:07:25.146 ================================ 00:07:25.146 Supported: No 00:07:25.146 00:07:25.146 Persistent Memory Region Support 00:07:25.146 ================================ 00:07:25.146 Supported: No 00:07:25.146 00:07:25.146 Admin Command Set Attributes 00:07:25.146 ============================ 00:07:25.146 Security Send/Receive: Not Supported 00:07:25.146 Format NVM: Supported 00:07:25.146 Firmware Activate/Download: Not Supported 00:07:25.146 Namespace Management: Supported 00:07:25.146 Device Self-Test: Not Supported 00:07:25.146 Directives: Supported 00:07:25.146 NVMe-MI: Not Supported 00:07:25.146 Virtualization Management: Not Supported 00:07:25.146 Doorbell Buffer Config: Supported 00:07:25.146 Get LBA Status Capability: Not Supported 00:07:25.146 Command & Feature Lockdown Capability: Not Supported 00:07:25.146 Abort Command Limit: 4 00:07:25.146 Async Event Request Limit: 4 00:07:25.146 Number of Firmware Slots: N/A 00:07:25.146 Firmware Slot 1 Read-Only: N/A 00:07:25.146 Firmware Activation Without Reset: N/A 00:07:25.146 Multiple Update Detection Support: N/A 00:07:25.146 Firmware Update Granularity: No Information Provided 00:07:25.146 Per-Namespace SMART Log: Yes 00:07:25.146 Asymmetric Namespace Access Log Page: Not Supported 00:07:25.146 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:25.146 Command Effects Log Page: Supported 00:07:25.146 Get Log Page Extended Data: Supported 00:07:25.146 Telemetry Log Pages: Not Supported 00:07:25.146 Persistent Event Log Pages: Not Supported 00:07:25.146 Supported Log Pages Log Page: May Support 00:07:25.146 Commands Supported & Effects Log Page: Not Supported 00:07:25.146 Feature Identifiers & Effects Log Page:May Support 00:07:25.146 NVMe-MI Commands & Effects Log Page: May Support 00:07:25.146 Data Area 4 for Telemetry Log: Not Supported 00:07:25.146 Error Log Page Entries Supported: 1 00:07:25.146 Keep Alive: Not Supported 00:07:25.146 00:07:25.146 NVM Command Set Attributes 00:07:25.146 ========================== 00:07:25.146 Submission Queue Entry Size 00:07:25.146 Max: 64 00:07:25.146 Min: 64 00:07:25.146 Completion Queue Entry Size 00:07:25.146 Max: 16 00:07:25.146 Min: 16 00:07:25.146 Number of Namespaces: 256 00:07:25.146 Compare Command: Supported 00:07:25.146 Write Uncorrectable Command: Not Supported 00:07:25.146 Dataset Management Command: 
Supported 00:07:25.146 Write Zeroes Command: Supported 00:07:25.146 Set Features Save Field: Supported 00:07:25.146 Reservations: Not Supported 00:07:25.146 Timestamp: Supported 00:07:25.146 Copy: Supported 00:07:25.146 Volatile Write Cache: Present 00:07:25.146 Atomic Write Unit (Normal): 1 00:07:25.146 Atomic Write Unit (PFail): 1 00:07:25.146 Atomic Compare & Write Unit: 1 00:07:25.146 Fused Compare & Write: Not Supported 00:07:25.146 Scatter-Gather List 00:07:25.146 SGL Command Set: Supported 00:07:25.146 SGL Keyed: Not Supported 00:07:25.146 SGL Bit Bucket Descriptor: Not Supported 00:07:25.146 SGL Metadata Pointer: Not Supported 00:07:25.146 Oversized SGL: Not Supported 00:07:25.146 SGL Metadata Address: Not Supported 00:07:25.146 SGL Offset: Not Supported 00:07:25.146 Transport SGL Data Block: Not Supported 00:07:25.146 Replay Protected Memory Block: Not Supported 00:07:25.146 00:07:25.146 Firmware Slot Information 00:07:25.146 ========================= 00:07:25.146 Active slot: 1 00:07:25.146 Slot 1 Firmware Revision: 1.0 00:07:25.146 00:07:25.146 00:07:25.146 Commands Supported and Effects 00:07:25.146 ============================== 00:07:25.146 Admin Commands 00:07:25.146 -------------- 00:07:25.146 Delete I/O Submission Queue (00h): Supported 00:07:25.146 Create I/O Submission Queue (01h): Supported 00:07:25.146 Get Log Page (02h): Supported 00:07:25.146 Delete I/O Completion Queue (04h): Supported 00:07:25.146 Create I/O Completion Queue (05h): Supported 00:07:25.146 Identify (06h): Supported 00:07:25.146 Abort (08h): Supported 00:07:25.146 Set Features (09h): Supported 00:07:25.146 Get Features (0Ah): Supported 00:07:25.146 Asynchronous Event Request (0Ch): Supported 00:07:25.146 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:25.146 Directive Send (19h): Supported 00:07:25.146 Directive Receive (1Ah): Supported 00:07:25.146 Virtualization Management (1Ch): Supported 00:07:25.146 Doorbell Buffer Config (7Ch): Supported 00:07:25.146 Format NVM (80h): Supported LBA-Change 00:07:25.146 I/O Commands 00:07:25.146 ------------ 00:07:25.146 Flush (00h): Supported LBA-Change 00:07:25.146 Write (01h): Supported LBA-Change 00:07:25.146 Read (02h): Supported 00:07:25.146 Compare (05h): Supported 00:07:25.146 Write Zeroes (08h): Supported LBA-Change 00:07:25.146 Dataset Management (09h): Supported LBA-Change 00:07:25.146 Unknown (0Ch): Supported 00:07:25.146 Unknown (12h): Supported 00:07:25.146 Copy (19h): Supported LBA-Change 00:07:25.146 Unknown (1Dh): Supported LBA-Change 00:07:25.146 00:07:25.146 Error Log 00:07:25.146 ========= 00:07:25.146 00:07:25.146 Arbitration 00:07:25.146 =========== 00:07:25.146 Arbitration Burst: no limit 00:07:25.146 00:07:25.146 Power Management 00:07:25.146 ================ 00:07:25.146 Number of Power States: 1 00:07:25.146 Current Power State: Power State #0 00:07:25.146 Power State #0: 00:07:25.146 Max Power: 25.00 W 00:07:25.146 Non-Operational State: Operational 00:07:25.146 Entry Latency: 16 microseconds 00:07:25.146 Exit Latency: 4 microseconds 00:07:25.146 Relative Read Throughput: 0 00:07:25.146 Relative Read Latency: 0 00:07:25.146 Relative Write Throughput: 0 00:07:25.146 Relative Write Latency: 0 00:07:25.146 Idle Power: Not Reported 00:07:25.146 Active Power: Not Reported 00:07:25.146 Non-Operational Permissive Mode: Not Supported 00:07:25.146 00:07:25.146 Health Information 00:07:25.146 ================== 00:07:25.146 Critical Warnings: 00:07:25.146 Available Spare Space: OK 00:07:25.146 Temperature: OK 00:07:25.146 Device 
Reliability: OK 00:07:25.146 Read Only: No 00:07:25.146 Volatile Memory Backup: OK 00:07:25.146 Current Temperature: 323 Kelvin (50 Celsius) 00:07:25.146 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:25.146 Available Spare: 0% 00:07:25.146 Available Spare Threshold: 0% 00:07:25.146 Life Percentage Used: 0% 00:07:25.146 Data Units Read: 2096 00:07:25.146 Data Units Written: 1883 00:07:25.146 Host Read Commands: 105354 00:07:25.146 Host Write Commands: 103623 00:07:25.146 Controller Busy Time: 0 minutes 00:07:25.146 Power Cycles: 0 00:07:25.146 Power On Hours: 0 hours 00:07:25.146 Unsafe Shutdowns: 0 00:07:25.146 Unrecoverable Media Errors: 0 00:07:25.146 Lifetime Error Log Entries: 0 00:07:25.146 Warning Temperature Time: 0 minutes 00:07:25.146 Critical Temperature Time: 0 minutes 00:07:25.146 00:07:25.146 Number of Queues 00:07:25.146 ================ 00:07:25.146 Number of I/O Submission Queues: 64 00:07:25.146 Number of I/O Completion Queues: 64 00:07:25.146 00:07:25.146 ZNS Specific Controller Data 00:07:25.146 ============================ 00:07:25.146 Zone Append Size Limit: 0 00:07:25.146 00:07:25.146 00:07:25.146 Active Namespaces 00:07:25.146 ================= 00:07:25.146 Namespace ID:1 00:07:25.146 Error Recovery Timeout: Unlimited 00:07:25.146 Command Set Identifier: NVM (00h) 00:07:25.146 Deallocate: Supported 00:07:25.146 Deallocated/Unwritten Error: Supported 00:07:25.146 Deallocated Read Value: All 0x00 00:07:25.146 Deallocate in Write Zeroes: Not Supported 00:07:25.146 Deallocated Guard Field: 0xFFFF 00:07:25.146 Flush: Supported 00:07:25.146 Reservation: Not Supported 00:07:25.146 Namespace Sharing Capabilities: Private 00:07:25.147 Size (in LBAs): 1048576 (4GiB) 00:07:25.147 Capacity (in LBAs): 1048576 (4GiB) 00:07:25.147 Utilization (in LBAs): 1048576 (4GiB) 00:07:25.147 Thin Provisioning: Not Supported 00:07:25.147 Per-NS Atomic Units: No 00:07:25.147 Maximum Single Source Range Length: 128 00:07:25.147 Maximum Copy Length: 128 00:07:25.147 Maximum Source Range Count: 128 00:07:25.147 NGUID/EUI64 Never Reused: No 00:07:25.147 Namespace Write Protected: No 00:07:25.147 Number of LBA Formats: 8 00:07:25.147 Current LBA Format: LBA Format #04 00:07:25.147 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:25.147 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:25.147 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:25.147 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:25.147 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:25.147 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:25.147 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:25.147 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:25.147 00:07:25.147 NVM Specific Namespace Data 00:07:25.147 =========================== 00:07:25.147 Logical Block Storage Tag Mask: 0 00:07:25.147 Protection Information Capabilities: 00:07:25.147 16b Guard Protection Information Storage Tag Support: No 00:07:25.147 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:25.147 Storage Tag Check Read Support: No 00:07:25.147 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Namespace ID:2 00:07:25.147 Error Recovery Timeout: Unlimited 00:07:25.147 Command Set Identifier: NVM (00h) 00:07:25.147 Deallocate: Supported 00:07:25.147 Deallocated/Unwritten Error: Supported 00:07:25.147 Deallocated Read Value: All 0x00 00:07:25.147 Deallocate in Write Zeroes: Not Supported 00:07:25.147 Deallocated Guard Field: 0xFFFF 00:07:25.147 Flush: Supported 00:07:25.147 Reservation: Not Supported 00:07:25.147 Namespace Sharing Capabilities: Private 00:07:25.147 Size (in LBAs): 1048576 (4GiB) 00:07:25.147 Capacity (in LBAs): 1048576 (4GiB) 00:07:25.147 Utilization (in LBAs): 1048576 (4GiB) 00:07:25.147 Thin Provisioning: Not Supported 00:07:25.147 Per-NS Atomic Units: No 00:07:25.147 Maximum Single Source Range Length: 128 00:07:25.147 Maximum Copy Length: 128 00:07:25.147 Maximum Source Range Count: 128 00:07:25.147 NGUID/EUI64 Never Reused: No 00:07:25.147 Namespace Write Protected: No 00:07:25.147 Number of LBA Formats: 8 00:07:25.147 Current LBA Format: LBA Format #04 00:07:25.147 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:25.147 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:25.147 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:25.147 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:25.147 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:25.147 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:25.147 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:25.147 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:25.147 00:07:25.147 NVM Specific Namespace Data 00:07:25.147 =========================== 00:07:25.147 Logical Block Storage Tag Mask: 0 00:07:25.147 Protection Information Capabilities: 00:07:25.147 16b Guard Protection Information Storage Tag Support: No 00:07:25.147 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:25.147 Storage Tag Check Read Support: No 00:07:25.147 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Namespace ID:3 00:07:25.147 Error Recovery Timeout: Unlimited 00:07:25.147 Command Set Identifier: NVM (00h) 00:07:25.147 Deallocate: Supported 00:07:25.147 Deallocated/Unwritten Error: Supported 00:07:25.147 Deallocated Read Value: All 0x00 00:07:25.147 Deallocate in Write Zeroes: Not Supported 00:07:25.147 Deallocated Guard Field: 0xFFFF 00:07:25.147 Flush: Supported 00:07:25.147 Reservation: Not Supported 00:07:25.147 
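Controller 12342 reports three private namespaces of 1048576 LBAs (4GiB) each in the dumps above and below. A sketch for pulling just the per-namespace sizes back out of the identify output (tool path and address copied from this run):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 \
  | grep -E 'Namespace ID:|Size \(in LBAs\):'
# three namespaces x 1048576 LBAs x 4096 bytes = 3 x 4 GiB provisioned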
Namespace Sharing Capabilities: Private 00:07:25.147 Size (in LBAs): 1048576 (4GiB) 00:07:25.147 Capacity (in LBAs): 1048576 (4GiB) 00:07:25.147 Utilization (in LBAs): 1048576 (4GiB) 00:07:25.147 Thin Provisioning: Not Supported 00:07:25.147 Per-NS Atomic Units: No 00:07:25.147 Maximum Single Source Range Length: 128 00:07:25.147 Maximum Copy Length: 128 00:07:25.147 Maximum Source Range Count: 128 00:07:25.147 NGUID/EUI64 Never Reused: No 00:07:25.147 Namespace Write Protected: No 00:07:25.147 Number of LBA Formats: 8 00:07:25.147 Current LBA Format: LBA Format #04 00:07:25.147 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:25.147 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:25.147 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:25.147 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:25.147 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:25.147 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:25.147 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:25.147 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:25.147 00:07:25.147 NVM Specific Namespace Data 00:07:25.147 =========================== 00:07:25.147 Logical Block Storage Tag Mask: 0 00:07:25.147 Protection Information Capabilities: 00:07:25.147 16b Guard Protection Information Storage Tag Support: No 00:07:25.147 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:25.147 Storage Tag Check Read Support: No 00:07:25.147 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.147 09:40:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:25.147 09:40:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:25.407 ===================================================== 00:07:25.407 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:25.407 ===================================================== 00:07:25.407 Controller Capabilities/Features 00:07:25.407 ================================ 00:07:25.407 Vendor ID: 1b36 00:07:25.407 Subsystem Vendor ID: 1af4 00:07:25.407 Serial Number: 12343 00:07:25.407 Model Number: QEMU NVMe Ctrl 00:07:25.407 Firmware Version: 8.0.0 00:07:25.407 Recommended Arb Burst: 6 00:07:25.407 IEEE OUI Identifier: 00 54 52 00:07:25.407 Multi-path I/O 00:07:25.407 May have multiple subsystem ports: No 00:07:25.407 May have multiple controllers: Yes 00:07:25.407 Associated with SR-IOV VF: No 00:07:25.407 Max Data Transfer Size: 524288 00:07:25.407 Max Number of Namespaces: 256 00:07:25.407 Max Number of I/O Queues: 64 00:07:25.407 NVMe Specification Version (VS): 1.4 00:07:25.407 NVMe Specification Version (Identify): 1.4 00:07:25.407 Maximum Queue Entries: 2048 
00:07:25.407 Contiguous Queues Required: Yes 00:07:25.407 Arbitration Mechanisms Supported 00:07:25.407 Weighted Round Robin: Not Supported 00:07:25.407 Vendor Specific: Not Supported 00:07:25.407 Reset Timeout: 7500 ms 00:07:25.407 Doorbell Stride: 4 bytes 00:07:25.407 NVM Subsystem Reset: Not Supported 00:07:25.407 Command Sets Supported 00:07:25.407 NVM Command Set: Supported 00:07:25.407 Boot Partition: Not Supported 00:07:25.407 Memory Page Size Minimum: 4096 bytes 00:07:25.407 Memory Page Size Maximum: 65536 bytes 00:07:25.407 Persistent Memory Region: Not Supported 00:07:25.407 Optional Asynchronous Events Supported 00:07:25.407 Namespace Attribute Notices: Supported 00:07:25.407 Firmware Activation Notices: Not Supported 00:07:25.407 ANA Change Notices: Not Supported 00:07:25.407 PLE Aggregate Log Change Notices: Not Supported 00:07:25.407 LBA Status Info Alert Notices: Not Supported 00:07:25.407 EGE Aggregate Log Change Notices: Not Supported 00:07:25.407 Normal NVM Subsystem Shutdown event: Not Supported 00:07:25.407 Zone Descriptor Change Notices: Not Supported 00:07:25.407 Discovery Log Change Notices: Not Supported 00:07:25.407 Controller Attributes 00:07:25.407 128-bit Host Identifier: Not Supported 00:07:25.407 Non-Operational Permissive Mode: Not Supported 00:07:25.407 NVM Sets: Not Supported 00:07:25.407 Read Recovery Levels: Not Supported 00:07:25.407 Endurance Groups: Supported 00:07:25.407 Predictable Latency Mode: Not Supported 00:07:25.407 Traffic Based Keep ALive: Not Supported 00:07:25.407 Namespace Granularity: Not Supported 00:07:25.407 SQ Associations: Not Supported 00:07:25.407 UUID List: Not Supported 00:07:25.407 Multi-Domain Subsystem: Not Supported 00:07:25.407 Fixed Capacity Management: Not Supported 00:07:25.407 Variable Capacity Management: Not Supported 00:07:25.407 Delete Endurance Group: Not Supported 00:07:25.407 Delete NVM Set: Not Supported 00:07:25.407 Extended LBA Formats Supported: Supported 00:07:25.407 Flexible Data Placement Supported: Supported 00:07:25.407 00:07:25.407 Controller Memory Buffer Support 00:07:25.407 ================================ 00:07:25.407 Supported: No 00:07:25.407 00:07:25.407 Persistent Memory Region Support 00:07:25.407 ================================ 00:07:25.407 Supported: No 00:07:25.407 00:07:25.407 Admin Command Set Attributes 00:07:25.407 ============================ 00:07:25.407 Security Send/Receive: Not Supported 00:07:25.407 Format NVM: Supported 00:07:25.407 Firmware Activate/Download: Not Supported 00:07:25.407 Namespace Management: Supported 00:07:25.407 Device Self-Test: Not Supported 00:07:25.407 Directives: Supported 00:07:25.407 NVMe-MI: Not Supported 00:07:25.407 Virtualization Management: Not Supported 00:07:25.407 Doorbell Buffer Config: Supported 00:07:25.407 Get LBA Status Capability: Not Supported 00:07:25.407 Command & Feature Lockdown Capability: Not Supported 00:07:25.407 Abort Command Limit: 4 00:07:25.407 Async Event Request Limit: 4 00:07:25.407 Number of Firmware Slots: N/A 00:07:25.408 Firmware Slot 1 Read-Only: N/A 00:07:25.408 Firmware Activation Without Reset: N/A 00:07:25.408 Multiple Update Detection Support: N/A 00:07:25.408 Firmware Update Granularity: No Information Provided 00:07:25.408 Per-Namespace SMART Log: Yes 00:07:25.408 Asymmetric Namespace Access Log Page: Not Supported 00:07:25.408 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:25.408 Command Effects Log Page: Supported 00:07:25.408 Get Log Page Extended Data: Supported 00:07:25.408 Telemetry Log Pages: Not 
Supported 00:07:25.408 Persistent Event Log Pages: Not Supported 00:07:25.408 Supported Log Pages Log Page: May Support 00:07:25.408 Commands Supported & Effects Log Page: Not Supported 00:07:25.408 Feature Identifiers & Effects Log Page:May Support 00:07:25.408 NVMe-MI Commands & Effects Log Page: May Support 00:07:25.408 Data Area 4 for Telemetry Log: Not Supported 00:07:25.408 Error Log Page Entries Supported: 1 00:07:25.408 Keep Alive: Not Supported 00:07:25.408 00:07:25.408 NVM Command Set Attributes 00:07:25.408 ========================== 00:07:25.408 Submission Queue Entry Size 00:07:25.408 Max: 64 00:07:25.408 Min: 64 00:07:25.408 Completion Queue Entry Size 00:07:25.408 Max: 16 00:07:25.408 Min: 16 00:07:25.408 Number of Namespaces: 256 00:07:25.408 Compare Command: Supported 00:07:25.408 Write Uncorrectable Command: Not Supported 00:07:25.408 Dataset Management Command: Supported 00:07:25.408 Write Zeroes Command: Supported 00:07:25.408 Set Features Save Field: Supported 00:07:25.408 Reservations: Not Supported 00:07:25.408 Timestamp: Supported 00:07:25.408 Copy: Supported 00:07:25.408 Volatile Write Cache: Present 00:07:25.408 Atomic Write Unit (Normal): 1 00:07:25.408 Atomic Write Unit (PFail): 1 00:07:25.408 Atomic Compare & Write Unit: 1 00:07:25.408 Fused Compare & Write: Not Supported 00:07:25.408 Scatter-Gather List 00:07:25.408 SGL Command Set: Supported 00:07:25.408 SGL Keyed: Not Supported 00:07:25.408 SGL Bit Bucket Descriptor: Not Supported 00:07:25.408 SGL Metadata Pointer: Not Supported 00:07:25.408 Oversized SGL: Not Supported 00:07:25.408 SGL Metadata Address: Not Supported 00:07:25.408 SGL Offset: Not Supported 00:07:25.408 Transport SGL Data Block: Not Supported 00:07:25.408 Replay Protected Memory Block: Not Supported 00:07:25.408 00:07:25.408 Firmware Slot Information 00:07:25.408 ========================= 00:07:25.408 Active slot: 1 00:07:25.408 Slot 1 Firmware Revision: 1.0 00:07:25.408 00:07:25.408 00:07:25.408 Commands Supported and Effects 00:07:25.408 ============================== 00:07:25.408 Admin Commands 00:07:25.408 -------------- 00:07:25.408 Delete I/O Submission Queue (00h): Supported 00:07:25.408 Create I/O Submission Queue (01h): Supported 00:07:25.408 Get Log Page (02h): Supported 00:07:25.408 Delete I/O Completion Queue (04h): Supported 00:07:25.408 Create I/O Completion Queue (05h): Supported 00:07:25.408 Identify (06h): Supported 00:07:25.408 Abort (08h): Supported 00:07:25.408 Set Features (09h): Supported 00:07:25.408 Get Features (0Ah): Supported 00:07:25.408 Asynchronous Event Request (0Ch): Supported 00:07:25.408 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:25.408 Directive Send (19h): Supported 00:07:25.408 Directive Receive (1Ah): Supported 00:07:25.408 Virtualization Management (1Ch): Supported 00:07:25.408 Doorbell Buffer Config (7Ch): Supported 00:07:25.408 Format NVM (80h): Supported LBA-Change 00:07:25.408 I/O Commands 00:07:25.408 ------------ 00:07:25.408 Flush (00h): Supported LBA-Change 00:07:25.408 Write (01h): Supported LBA-Change 00:07:25.408 Read (02h): Supported 00:07:25.408 Compare (05h): Supported 00:07:25.408 Write Zeroes (08h): Supported LBA-Change 00:07:25.408 Dataset Management (09h): Supported LBA-Change 00:07:25.408 Unknown (0Ch): Supported 00:07:25.408 Unknown (12h): Supported 00:07:25.408 Copy (19h): Supported LBA-Change 00:07:25.408 Unknown (1Dh): Supported LBA-Change 00:07:25.408 00:07:25.408 Error Log 00:07:25.408 ========= 00:07:25.408 00:07:25.408 Arbitration 00:07:25.408 =========== 
00:07:25.408 Arbitration Burst: no limit 00:07:25.408 00:07:25.408 Power Management 00:07:25.408 ================ 00:07:25.408 Number of Power States: 1 00:07:25.408 Current Power State: Power State #0 00:07:25.408 Power State #0: 00:07:25.408 Max Power: 25.00 W 00:07:25.408 Non-Operational State: Operational 00:07:25.408 Entry Latency: 16 microseconds 00:07:25.408 Exit Latency: 4 microseconds 00:07:25.408 Relative Read Throughput: 0 00:07:25.408 Relative Read Latency: 0 00:07:25.408 Relative Write Throughput: 0 00:07:25.408 Relative Write Latency: 0 00:07:25.408 Idle Power: Not Reported 00:07:25.408 Active Power: Not Reported 00:07:25.408 Non-Operational Permissive Mode: Not Supported 00:07:25.408 00:07:25.408 Health Information 00:07:25.408 ================== 00:07:25.408 Critical Warnings: 00:07:25.408 Available Spare Space: OK 00:07:25.408 Temperature: OK 00:07:25.408 Device Reliability: OK 00:07:25.408 Read Only: No 00:07:25.408 Volatile Memory Backup: OK 00:07:25.408 Current Temperature: 323 Kelvin (50 Celsius) 00:07:25.408 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:25.408 Available Spare: 0% 00:07:25.408 Available Spare Threshold: 0% 00:07:25.408 Life Percentage Used: 0% 00:07:25.408 Data Units Read: 825 00:07:25.408 Data Units Written: 754 00:07:25.408 Host Read Commands: 36161 00:07:25.408 Host Write Commands: 35584 00:07:25.408 Controller Busy Time: 0 minutes 00:07:25.408 Power Cycles: 0 00:07:25.408 Power On Hours: 0 hours 00:07:25.408 Unsafe Shutdowns: 0 00:07:25.408 Unrecoverable Media Errors: 0 00:07:25.408 Lifetime Error Log Entries: 0 00:07:25.408 Warning Temperature Time: 0 minutes 00:07:25.408 Critical Temperature Time: 0 minutes 00:07:25.408 00:07:25.408 Number of Queues 00:07:25.408 ================ 00:07:25.408 Number of I/O Submission Queues: 64 00:07:25.408 Number of I/O Completion Queues: 64 00:07:25.408 00:07:25.408 ZNS Specific Controller Data 00:07:25.408 ============================ 00:07:25.408 Zone Append Size Limit: 0 00:07:25.408 00:07:25.408 00:07:25.408 Active Namespaces 00:07:25.408 ================= 00:07:25.408 Namespace ID:1 00:07:25.408 Error Recovery Timeout: Unlimited 00:07:25.408 Command Set Identifier: NVM (00h) 00:07:25.408 Deallocate: Supported 00:07:25.408 Deallocated/Unwritten Error: Supported 00:07:25.408 Deallocated Read Value: All 0x00 00:07:25.408 Deallocate in Write Zeroes: Not Supported 00:07:25.408 Deallocated Guard Field: 0xFFFF 00:07:25.408 Flush: Supported 00:07:25.408 Reservation: Not Supported 00:07:25.408 Namespace Sharing Capabilities: Multiple Controllers 00:07:25.408 Size (in LBAs): 262144 (1GiB) 00:07:25.408 Capacity (in LBAs): 262144 (1GiB) 00:07:25.408 Utilization (in LBAs): 262144 (1GiB) 00:07:25.408 Thin Provisioning: Not Supported 00:07:25.408 Per-NS Atomic Units: No 00:07:25.408 Maximum Single Source Range Length: 128 00:07:25.408 Maximum Copy Length: 128 00:07:25.408 Maximum Source Range Count: 128 00:07:25.408 NGUID/EUI64 Never Reused: No 00:07:25.408 Namespace Write Protected: No 00:07:25.408 Endurance group ID: 1 00:07:25.408 Number of LBA Formats: 8 00:07:25.408 Current LBA Format: LBA Format #04 00:07:25.408 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:25.408 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:25.408 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:25.408 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:25.408 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:25.408 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:25.408 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:25.408 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:25.408 00:07:25.408 Get Feature FDP: 00:07:25.408 ================ 00:07:25.408 Enabled: Yes 00:07:25.408 FDP configuration index: 0 00:07:25.408 00:07:25.408 FDP configurations log page 00:07:25.408 =========================== 00:07:25.408 Number of FDP configurations: 1 00:07:25.408 Version: 0 00:07:25.408 Size: 112 00:07:25.408 FDP Configuration Descriptor: 0 00:07:25.408 Descriptor Size: 96 00:07:25.408 Reclaim Group Identifier format: 2 00:07:25.408 FDP Volatile Write Cache: Not Present 00:07:25.408 FDP Configuration: Valid 00:07:25.408 Vendor Specific Size: 0 00:07:25.408 Number of Reclaim Groups: 2 00:07:25.408 Number of Reclaim Unit Handles: 8 00:07:25.408 Max Placement Identifiers: 128 00:07:25.408 Number of Namespaces Supported: 256 00:07:25.408 Reclaim unit Nominal Size: 6000000 bytes 00:07:25.408 Estimated Reclaim Unit Time Limit: Not Reported 00:07:25.409 RUH Desc #000: RUH Type: Initially Isolated 00:07:25.409 RUH Desc #001: RUH Type: Initially Isolated 00:07:25.409 RUH Desc #002: RUH Type: Initially Isolated 00:07:25.409 RUH Desc #003: RUH Type: Initially Isolated 00:07:25.409 RUH Desc #004: RUH Type: Initially Isolated 00:07:25.409 RUH Desc #005: RUH Type: Initially Isolated 00:07:25.409 RUH Desc #006: RUH Type: Initially Isolated 00:07:25.409 RUH Desc #007: RUH Type: Initially Isolated 00:07:25.409 00:07:25.409 FDP reclaim unit handle usage log page 00:07:25.409 ====================================== 00:07:25.409 Number of Reclaim Unit Handles: 8 00:07:25.409 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:25.409 RUH Usage Desc #001: RUH Attributes: Unused 00:07:25.409 RUH Usage Desc #002: RUH Attributes: Unused 00:07:25.409 RUH Usage Desc #003: RUH Attributes: Unused 00:07:25.409 RUH Usage Desc #004: RUH Attributes: Unused 00:07:25.409 RUH Usage Desc #005: RUH Attributes: Unused 00:07:25.409 RUH Usage Desc #006: RUH Attributes: Unused 00:07:25.409 RUH Usage Desc #007: RUH Attributes: Unused 00:07:25.409 00:07:25.409 FDP statistics log page 00:07:25.409 ======================= 00:07:25.409 Host bytes with metadata written: 481402880 00:07:25.409 Media bytes with metadata written: 481456128 00:07:25.409 Media bytes erased: 0 00:07:25.409 00:07:25.409 FDP events log page 00:07:25.409 =================== 00:07:25.409 Number of FDP events: 0 00:07:25.409 00:07:25.409 NVM Specific Namespace Data 00:07:25.409 =========================== 00:07:25.409 Logical Block Storage Tag Mask: 0 00:07:25.409 Protection Information Capabilities: 00:07:25.409 16b Guard Protection Information Storage Tag Support: No 00:07:25.409 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:25.409 Storage Tag Check Read Support: No 00:07:25.409 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.409 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.409 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.409 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.409 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.409 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.409 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.409 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:25.409 00:07:25.409 real 0m1.210s 00:07:25.409 user 0m0.445s 00:07:25.409 sys 0m0.548s 00:07:25.409 09:40:04 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.409 09:40:04 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:25.409 ************************************ 00:07:25.409 END TEST nvme_identify 00:07:25.409 ************************************ 00:07:25.409 09:40:04 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:25.409 09:40:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.409 09:40:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.409 09:40:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.409 ************************************ 00:07:25.409 START TEST nvme_perf 00:07:25.409 ************************************ 00:07:25.409 09:40:04 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:25.409 09:40:04 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:26.787 Initializing NVMe Controllers 00:07:26.787 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:26.787 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:26.787 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:26.787 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:26.787 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:26.787 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:26.787 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:26.787 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:26.787 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:26.787 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:26.787 Initialization complete. Launching workers. 
00:07:26.787 ======================================================== 00:07:26.787 Latency(us) 00:07:26.787 Device Information : IOPS MiB/s Average min max 00:07:26.787 PCIE (0000:00:13.0) NSID 1 from core 0: 18394.25 215.56 6968.31 5556.74 32676.16 00:07:26.787 PCIE (0000:00:10.0) NSID 1 from core 0: 18394.25 215.56 6957.39 5481.30 31109.60 00:07:26.787 PCIE (0000:00:11.0) NSID 1 from core 0: 18394.25 215.56 6947.44 5583.71 29314.93 00:07:26.787 PCIE (0000:00:12.0) NSID 1 from core 0: 18394.25 215.56 6936.62 5541.02 27823.26 00:07:26.787 PCIE (0000:00:12.0) NSID 2 from core 0: 18394.25 215.56 6926.05 5590.13 26071.92 00:07:26.787 PCIE (0000:00:12.0) NSID 3 from core 0: 18458.12 216.31 6891.59 5553.77 21012.19 00:07:26.787 ======================================================== 00:07:26.787 Total : 110429.40 1294.09 6937.87 5481.30 32676.16 00:07:26.787 00:07:26.787 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:26.787 ================================================================================= 00:07:26.787 1.00000% : 5721.797us 00:07:26.787 10.00000% : 5873.034us 00:07:26.787 25.00000% : 6074.683us 00:07:26.787 50.00000% : 6377.157us 00:07:26.787 75.00000% : 6805.662us 00:07:26.787 90.00000% : 8670.917us 00:07:26.787 95.00000% : 10636.997us 00:07:26.787 98.00000% : 11796.480us 00:07:26.787 99.00000% : 12401.428us 00:07:26.787 99.50000% : 27625.945us 00:07:26.787 99.90000% : 32263.877us 00:07:26.787 99.99000% : 32667.175us 00:07:26.787 99.99900% : 32868.825us 00:07:26.787 99.99990% : 32868.825us 00:07:26.787 99.99999% : 32868.825us 00:07:26.787 00:07:26.787 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:26.787 ================================================================================= 00:07:26.787 1.00000% : 5646.178us 00:07:26.787 10.00000% : 5822.622us 00:07:26.787 25.00000% : 6049.477us 00:07:26.787 50.00000% : 6402.363us 00:07:26.787 75.00000% : 6856.074us 00:07:26.787 90.00000% : 8872.566us 00:07:26.787 95.00000% : 10435.348us 00:07:26.787 98.00000% : 12048.542us 00:07:26.787 99.00000% : 12502.252us 00:07:26.787 99.50000% : 26012.751us 00:07:26.787 99.90000% : 30852.332us 00:07:26.787 99.99000% : 31255.631us 00:07:26.787 99.99900% : 31255.631us 00:07:26.787 99.99990% : 31255.631us 00:07:26.787 99.99999% : 31255.631us 00:07:26.787 00:07:26.787 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:26.787 ================================================================================= 00:07:26.787 1.00000% : 5721.797us 00:07:26.787 10.00000% : 5873.034us 00:07:26.787 25.00000% : 6074.683us 00:07:26.787 50.00000% : 6377.157us 00:07:26.787 75.00000% : 6805.662us 00:07:26.787 90.00000% : 9023.803us 00:07:26.787 95.00000% : 10233.698us 00:07:26.787 98.00000% : 12199.778us 00:07:26.787 99.00000% : 12703.902us 00:07:26.787 99.50000% : 24197.908us 00:07:26.787 99.90000% : 29037.489us 00:07:26.787 99.99000% : 29440.788us 00:07:26.787 99.99900% : 29440.788us 00:07:26.787 99.99990% : 29440.788us 00:07:26.787 99.99999% : 29440.788us 00:07:26.787 00:07:26.787 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:26.787 ================================================================================= 00:07:26.787 1.00000% : 5721.797us 00:07:26.787 10.00000% : 5873.034us 00:07:26.787 25.00000% : 6074.683us 00:07:26.787 50.00000% : 6377.157us 00:07:26.787 75.00000% : 6805.662us 00:07:26.787 90.00000% : 8973.391us 00:07:26.787 95.00000% : 10435.348us 00:07:26.787 98.00000% : 12149.366us 00:07:26.787 99.00000% 
: 12754.314us 00:07:26.787 99.50000% : 22786.363us 00:07:26.787 99.90000% : 27424.295us 00:07:26.787 99.99000% : 27827.594us 00:07:26.787 99.99900% : 27827.594us 00:07:26.787 99.99990% : 27827.594us 00:07:26.787 99.99999% : 27827.594us 00:07:26.787 00:07:26.787 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:26.787 ================================================================================= 00:07:26.787 1.00000% : 5721.797us 00:07:26.787 10.00000% : 5873.034us 00:07:26.787 25.00000% : 6074.683us 00:07:26.787 50.00000% : 6377.157us 00:07:26.787 75.00000% : 6856.074us 00:07:26.787 90.00000% : 8922.978us 00:07:26.787 95.00000% : 10485.760us 00:07:26.787 98.00000% : 11846.892us 00:07:26.787 99.00000% : 12603.077us 00:07:26.787 99.50000% : 21072.345us 00:07:26.787 99.90000% : 25710.277us 00:07:26.787 99.99000% : 26214.400us 00:07:26.787 99.99900% : 26214.400us 00:07:26.787 99.99990% : 26214.400us 00:07:26.787 99.99999% : 26214.400us 00:07:26.787 00:07:26.787 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:26.787 ================================================================================= 00:07:26.787 1.00000% : 5721.797us 00:07:26.787 10.00000% : 5873.034us 00:07:26.787 25.00000% : 6074.683us 00:07:26.787 50.00000% : 6377.157us 00:07:26.787 75.00000% : 6856.074us 00:07:26.787 90.00000% : 8822.154us 00:07:26.787 95.00000% : 10536.172us 00:07:26.787 98.00000% : 11645.243us 00:07:26.787 99.00000% : 12502.252us 00:07:26.787 99.50000% : 15930.289us 00:07:26.787 99.90000% : 20669.046us 00:07:26.787 99.99000% : 21072.345us 00:07:26.787 99.99900% : 21072.345us 00:07:26.787 99.99990% : 21072.345us 00:07:26.787 99.99999% : 21072.345us 00:07:26.787 00:07:26.787 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:26.787 ============================================================================== 00:07:26.787 Range in us Cumulative IO count 00:07:26.787 5545.354 - 5570.560: 0.0109% ( 2) 00:07:26.787 5570.560 - 5595.766: 0.0326% ( 4) 00:07:26.787 5595.766 - 5620.972: 0.0380% ( 1) 00:07:26.787 5620.972 - 5646.178: 0.0814% ( 8) 00:07:26.787 5646.178 - 5671.385: 0.2007% ( 22) 00:07:26.787 5671.385 - 5696.591: 0.6131% ( 76) 00:07:26.787 5696.591 - 5721.797: 1.2153% ( 111) 00:07:26.787 5721.797 - 5747.003: 2.0779% ( 159) 00:07:26.787 5747.003 - 5772.209: 3.4939% ( 261) 00:07:26.787 5772.209 - 5797.415: 5.1812% ( 311) 00:07:26.787 5797.415 - 5822.622: 7.0367% ( 342) 00:07:26.787 5822.622 - 5847.828: 8.7945% ( 324) 00:07:26.787 5847.828 - 5873.034: 10.4709% ( 309) 00:07:26.787 5873.034 - 5898.240: 12.2070% ( 320) 00:07:26.787 5898.240 - 5923.446: 13.8238% ( 298) 00:07:26.787 5923.446 - 5948.652: 15.6955% ( 345) 00:07:26.787 5948.652 - 5973.858: 17.6541% ( 361) 00:07:26.787 5973.858 - 5999.065: 19.6126% ( 361) 00:07:26.787 5999.065 - 6024.271: 21.6200% ( 370) 00:07:26.787 6024.271 - 6049.477: 23.6491% ( 374) 00:07:26.787 6049.477 - 6074.683: 25.7758% ( 392) 00:07:26.787 6074.683 - 6099.889: 27.8863% ( 389) 00:07:26.787 6099.889 - 6125.095: 29.9859% ( 387) 00:07:26.787 6125.095 - 6150.302: 32.0258% ( 376) 00:07:26.787 6150.302 - 6175.508: 34.0875% ( 380) 00:07:26.787 6175.508 - 6200.714: 36.1654% ( 383) 00:07:26.787 6200.714 - 6225.920: 38.2704% ( 388) 00:07:26.787 6225.920 - 6251.126: 40.4026% ( 393) 00:07:26.787 6251.126 - 6276.332: 42.5564% ( 397) 00:07:26.787 6276.332 - 6301.538: 44.7049% ( 396) 00:07:26.787 6301.538 - 6326.745: 46.8913% ( 403) 00:07:26.787 6326.745 - 6351.951: 49.0397% ( 396) 00:07:26.787 6351.951 - 6377.157: 51.3129% ( 
419) 00:07:26.787 6377.157 - 6402.363: 53.4668% ( 397) 00:07:26.787 6402.363 - 6427.569: 55.7617% ( 423) 00:07:26.787 6427.569 - 6452.775: 58.0078% ( 414) 00:07:26.787 6452.775 - 6503.188: 62.4023% ( 810) 00:07:26.787 6503.188 - 6553.600: 66.3140% ( 721) 00:07:26.787 6553.600 - 6604.012: 69.1135% ( 516) 00:07:26.787 6604.012 - 6654.425: 71.1263% ( 371) 00:07:26.787 6654.425 - 6704.837: 72.5857% ( 269) 00:07:26.787 6704.837 - 6755.249: 73.8498% ( 233) 00:07:26.787 6755.249 - 6805.662: 75.0054% ( 213) 00:07:26.787 6805.662 - 6856.074: 75.9549% ( 175) 00:07:26.787 6856.074 - 6906.486: 76.8338% ( 162) 00:07:26.787 6906.486 - 6956.898: 77.7181% ( 163) 00:07:26.787 6956.898 - 7007.311: 78.4722% ( 139) 00:07:26.787 7007.311 - 7057.723: 79.2318% ( 140) 00:07:26.787 7057.723 - 7108.135: 79.9642% ( 135) 00:07:26.787 7108.135 - 7158.548: 80.6098% ( 119) 00:07:26.787 7158.548 - 7208.960: 81.1957% ( 108) 00:07:26.787 7208.960 - 7259.372: 81.6786% ( 89) 00:07:26.787 7259.372 - 7309.785: 82.0909% ( 76) 00:07:26.787 7309.785 - 7360.197: 82.5575% ( 86) 00:07:26.787 7360.197 - 7410.609: 82.9427% ( 71) 00:07:26.787 7410.609 - 7461.022: 83.2248% ( 52) 00:07:26.787 7461.022 - 7511.434: 83.5069% ( 52) 00:07:26.787 7511.434 - 7561.846: 83.7891% ( 52) 00:07:26.787 7561.846 - 7612.258: 84.0549% ( 49) 00:07:26.787 7612.258 - 7662.671: 84.2665% ( 39) 00:07:26.787 7662.671 - 7713.083: 84.5106% ( 45) 00:07:26.787 7713.083 - 7763.495: 84.7711% ( 48) 00:07:26.787 7763.495 - 7813.908: 85.0423% ( 50) 00:07:26.787 7813.908 - 7864.320: 85.2702% ( 42) 00:07:26.787 7864.320 - 7914.732: 85.5360% ( 49) 00:07:26.787 7914.732 - 7965.145: 85.7585% ( 41) 00:07:26.787 7965.145 - 8015.557: 86.0569% ( 55) 00:07:26.787 8015.557 - 8065.969: 86.3607% ( 56) 00:07:26.787 8065.969 - 8116.382: 86.6319% ( 50) 00:07:26.787 8116.382 - 8166.794: 86.9683% ( 62) 00:07:26.787 8166.794 - 8217.206: 87.3318% ( 67) 00:07:26.787 8217.206 - 8267.618: 87.6845% ( 65) 00:07:26.787 8267.618 - 8318.031: 88.0208% ( 62) 00:07:26.787 8318.031 - 8368.443: 88.3464% ( 60) 00:07:26.787 8368.443 - 8418.855: 88.6502% ( 56) 00:07:26.788 8418.855 - 8469.268: 88.9757% ( 60) 00:07:26.788 8469.268 - 8519.680: 89.3229% ( 64) 00:07:26.788 8519.680 - 8570.092: 89.6322% ( 57) 00:07:26.788 8570.092 - 8620.505: 89.9143% ( 52) 00:07:26.788 8620.505 - 8670.917: 90.2072% ( 54) 00:07:26.788 8670.917 - 8721.329: 90.4568% ( 46) 00:07:26.788 8721.329 - 8771.742: 90.7064% ( 46) 00:07:26.788 8771.742 - 8822.154: 90.8908% ( 34) 00:07:26.788 8822.154 - 8872.566: 91.0536% ( 30) 00:07:26.788 8872.566 - 8922.978: 91.2326% ( 33) 00:07:26.788 8922.978 - 8973.391: 91.3900% ( 29) 00:07:26.788 8973.391 - 9023.803: 91.5039% ( 21) 00:07:26.788 9023.803 - 9074.215: 91.5744% ( 13) 00:07:26.788 9074.215 - 9124.628: 91.6341% ( 11) 00:07:26.788 9124.628 - 9175.040: 91.6938% ( 11) 00:07:26.788 9175.040 - 9225.452: 91.7372% ( 8) 00:07:26.788 9225.452 - 9275.865: 91.8023% ( 12) 00:07:26.788 9275.865 - 9326.277: 91.8511% ( 9) 00:07:26.788 9326.277 - 9376.689: 91.9108% ( 11) 00:07:26.788 9376.689 - 9427.102: 91.9813% ( 13) 00:07:26.788 9427.102 - 9477.514: 92.0790% ( 18) 00:07:26.788 9477.514 - 9527.926: 92.1549% ( 14) 00:07:26.788 9527.926 - 9578.338: 92.2472% ( 17) 00:07:26.788 9578.338 - 9628.751: 92.3557% ( 20) 00:07:26.788 9628.751 - 9679.163: 92.4371% ( 15) 00:07:26.788 9679.163 - 9729.575: 92.5184% ( 15) 00:07:26.788 9729.575 - 9779.988: 92.5998% ( 15) 00:07:26.788 9779.988 - 9830.400: 92.7355% ( 25) 00:07:26.788 9830.400 - 9880.812: 92.8548% ( 22) 00:07:26.788 9880.812 - 9931.225: 92.9633% ( 20) 
00:07:26.788 9931.225 - 9981.637: 93.1098% ( 27) 00:07:26.788 9981.637 - 10032.049: 93.2997% ( 35) 00:07:26.788 10032.049 - 10082.462: 93.4733% ( 32) 00:07:26.788 10082.462 - 10132.874: 93.6578% ( 34) 00:07:26.788 10132.874 - 10183.286: 93.8097% ( 28) 00:07:26.788 10183.286 - 10233.698: 93.9616% ( 28) 00:07:26.788 10233.698 - 10284.111: 94.1189% ( 29) 00:07:26.788 10284.111 - 10334.523: 94.2383% ( 22) 00:07:26.788 10334.523 - 10384.935: 94.3739% ( 25) 00:07:26.788 10384.935 - 10435.348: 94.5150% ( 26) 00:07:26.788 10435.348 - 10485.760: 94.6669% ( 28) 00:07:26.788 10485.760 - 10536.172: 94.8025% ( 25) 00:07:26.788 10536.172 - 10586.585: 94.9327% ( 24) 00:07:26.788 10586.585 - 10636.997: 95.0901% ( 29) 00:07:26.788 10636.997 - 10687.409: 95.2528% ( 30) 00:07:26.788 10687.409 - 10737.822: 95.3885% ( 25) 00:07:26.788 10737.822 - 10788.234: 95.5078% ( 22) 00:07:26.788 10788.234 - 10838.646: 95.6651% ( 29) 00:07:26.788 10838.646 - 10889.058: 95.7899% ( 23) 00:07:26.788 10889.058 - 10939.471: 95.9147% ( 23) 00:07:26.788 10939.471 - 10989.883: 96.0558% ( 26) 00:07:26.788 10989.883 - 11040.295: 96.1914% ( 25) 00:07:26.788 11040.295 - 11090.708: 96.3325% ( 26) 00:07:26.788 11090.708 - 11141.120: 96.4627% ( 24) 00:07:26.788 11141.120 - 11191.532: 96.6146% ( 28) 00:07:26.788 11191.532 - 11241.945: 96.7665% ( 28) 00:07:26.788 11241.945 - 11292.357: 96.9293% ( 30) 00:07:26.788 11292.357 - 11342.769: 97.0540% ( 23) 00:07:26.788 11342.769 - 11393.182: 97.1897% ( 25) 00:07:26.788 11393.182 - 11443.594: 97.3145% ( 23) 00:07:26.788 11443.594 - 11494.006: 97.4392% ( 23) 00:07:26.788 11494.006 - 11544.418: 97.5532% ( 21) 00:07:26.788 11544.418 - 11594.831: 97.6780% ( 23) 00:07:26.788 11594.831 - 11645.243: 97.7865% ( 20) 00:07:26.788 11645.243 - 11695.655: 97.8787% ( 17) 00:07:26.788 11695.655 - 11746.068: 97.9655% ( 16) 00:07:26.788 11746.068 - 11796.480: 98.0740% ( 20) 00:07:26.788 11796.480 - 11846.892: 98.1879% ( 21) 00:07:26.788 11846.892 - 11897.305: 98.3181% ( 24) 00:07:26.788 11897.305 - 11947.717: 98.4104% ( 17) 00:07:26.788 11947.717 - 11998.129: 98.4972% ( 16) 00:07:26.788 11998.129 - 12048.542: 98.6111% ( 21) 00:07:26.788 12048.542 - 12098.954: 98.7033% ( 17) 00:07:26.788 12098.954 - 12149.366: 98.7793% ( 14) 00:07:26.788 12149.366 - 12199.778: 98.8498% ( 13) 00:07:26.788 12199.778 - 12250.191: 98.8987% ( 9) 00:07:26.788 12250.191 - 12300.603: 98.9366% ( 7) 00:07:26.788 12300.603 - 12351.015: 98.9692% ( 6) 00:07:26.788 12351.015 - 12401.428: 99.0126% ( 8) 00:07:26.788 12401.428 - 12451.840: 99.0560% ( 8) 00:07:26.788 12451.840 - 12502.252: 99.0940% ( 7) 00:07:26.788 12502.252 - 12552.665: 99.1374% ( 8) 00:07:26.788 12552.665 - 12603.077: 99.1699% ( 6) 00:07:26.788 12603.077 - 12653.489: 99.2133% ( 8) 00:07:26.788 12653.489 - 12703.902: 99.2350% ( 4) 00:07:26.788 12703.902 - 12754.314: 99.2567% ( 4) 00:07:26.788 12754.314 - 12804.726: 99.2784% ( 4) 00:07:26.788 12804.726 - 12855.138: 99.3001% ( 4) 00:07:26.788 12855.138 - 12905.551: 99.3056% ( 1) 00:07:26.788 26617.698 - 26819.348: 99.3164% ( 2) 00:07:26.788 26819.348 - 27020.997: 99.3652% ( 9) 00:07:26.788 27020.997 - 27222.646: 99.4141% ( 9) 00:07:26.788 27222.646 - 27424.295: 99.4575% ( 8) 00:07:26.788 27424.295 - 27625.945: 99.5063% ( 9) 00:07:26.788 27625.945 - 27827.594: 99.5497% ( 8) 00:07:26.788 27827.594 - 28029.243: 99.5985% ( 9) 00:07:26.788 28029.243 - 28230.892: 99.6474% ( 9) 00:07:26.788 28230.892 - 28432.542: 99.6528% ( 1) 00:07:26.788 31053.982 - 31255.631: 99.6636% ( 2) 00:07:26.788 31255.631 - 31457.280: 99.7125% ( 9) 00:07:26.788 
31457.280 - 31658.929: 99.7613% ( 9) 00:07:26.788 31658.929 - 31860.578: 99.8047% ( 8) 00:07:26.788 31860.578 - 32062.228: 99.8535% ( 9) 00:07:26.788 32062.228 - 32263.877: 99.9023% ( 9) 00:07:26.788 32263.877 - 32465.526: 99.9512% ( 9) 00:07:26.788 32465.526 - 32667.175: 99.9946% ( 8) 00:07:26.788 32667.175 - 32868.825: 100.0000% ( 1) 00:07:26.788 00:07:26.788 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:26.788 ============================================================================== 00:07:26.788 Range in us Cumulative IO count 00:07:26.788 5469.735 - 5494.942: 0.0109% ( 2) 00:07:26.788 5494.942 - 5520.148: 0.0326% ( 4) 00:07:26.788 5520.148 - 5545.354: 0.0597% ( 5) 00:07:26.788 5545.354 - 5570.560: 0.1411% ( 15) 00:07:26.788 5570.560 - 5595.766: 0.3581% ( 40) 00:07:26.788 5595.766 - 5620.972: 0.7324% ( 69) 00:07:26.788 5620.972 - 5646.178: 1.4540% ( 133) 00:07:26.788 5646.178 - 5671.385: 2.4740% ( 188) 00:07:26.788 5671.385 - 5696.591: 3.7055% ( 227) 00:07:26.788 5696.591 - 5721.797: 4.9045% ( 221) 00:07:26.788 5721.797 - 5747.003: 6.2283% ( 244) 00:07:26.788 5747.003 - 5772.209: 7.6552% ( 263) 00:07:26.788 5772.209 - 5797.415: 9.1960% ( 284) 00:07:26.788 5797.415 - 5822.622: 10.8181% ( 299) 00:07:26.788 5822.622 - 5847.828: 12.3264% ( 278) 00:07:26.788 5847.828 - 5873.034: 13.7967% ( 271) 00:07:26.788 5873.034 - 5898.240: 15.4622% ( 307) 00:07:26.788 5898.240 - 5923.446: 17.0030% ( 284) 00:07:26.788 5923.446 - 5948.652: 18.7174% ( 316) 00:07:26.788 5948.652 - 5973.858: 20.4264% ( 315) 00:07:26.788 5973.858 - 5999.065: 22.1137% ( 311) 00:07:26.788 5999.065 - 6024.271: 23.8390% ( 318) 00:07:26.788 6024.271 - 6049.477: 25.4829% ( 303) 00:07:26.788 6049.477 - 6074.683: 27.1647% ( 310) 00:07:26.788 6074.683 - 6099.889: 28.9985% ( 338) 00:07:26.788 6099.889 - 6125.095: 30.6641% ( 307) 00:07:26.788 6125.095 - 6150.302: 32.4219% ( 324) 00:07:26.788 6150.302 - 6175.508: 34.2828% ( 343) 00:07:26.788 6175.508 - 6200.714: 36.0189% ( 320) 00:07:26.788 6200.714 - 6225.920: 37.9123% ( 349) 00:07:26.788 6225.920 - 6251.126: 39.7244% ( 334) 00:07:26.788 6251.126 - 6276.332: 41.6558% ( 356) 00:07:26.788 6276.332 - 6301.538: 43.4787% ( 336) 00:07:26.788 6301.538 - 6326.745: 45.4319% ( 360) 00:07:26.788 6326.745 - 6351.951: 47.3036% ( 345) 00:07:26.788 6351.951 - 6377.157: 49.3164% ( 371) 00:07:26.788 6377.157 - 6402.363: 51.1610% ( 340) 00:07:26.788 6402.363 - 6427.569: 53.2606% ( 387) 00:07:26.788 6427.569 - 6452.775: 55.2843% ( 373) 00:07:26.788 6452.775 - 6503.188: 59.1743% ( 717) 00:07:26.788 6503.188 - 6553.600: 63.0425% ( 713) 00:07:26.788 6553.600 - 6604.012: 66.7101% ( 676) 00:07:26.788 6604.012 - 6654.425: 69.6452% ( 541) 00:07:26.788 6654.425 - 6704.837: 71.7285% ( 384) 00:07:26.788 6704.837 - 6755.249: 73.2910% ( 288) 00:07:26.788 6755.249 - 6805.662: 74.5063% ( 224) 00:07:26.788 6805.662 - 6856.074: 75.5859% ( 199) 00:07:26.788 6856.074 - 6906.486: 76.4703% ( 163) 00:07:26.788 6906.486 - 6956.898: 77.3709% ( 166) 00:07:26.788 6956.898 - 7007.311: 78.1359% ( 141) 00:07:26.788 7007.311 - 7057.723: 78.8194% ( 126) 00:07:26.788 7057.723 - 7108.135: 79.5410% ( 133) 00:07:26.788 7108.135 - 7158.548: 80.1758% ( 117) 00:07:26.788 7158.548 - 7208.960: 80.7997% ( 115) 00:07:26.788 7208.960 - 7259.372: 81.3748% ( 106) 00:07:26.788 7259.372 - 7309.785: 81.8902% ( 95) 00:07:26.788 7309.785 - 7360.197: 82.3676% ( 88) 00:07:26.788 7360.197 - 7410.609: 82.7908% ( 78) 00:07:26.788 7410.609 - 7461.022: 83.1489% ( 66) 00:07:26.788 7461.022 - 7511.434: 83.5069% ( 66) 00:07:26.788 
7511.434 - 7561.846: 83.9084% ( 74) 00:07:26.788 7561.846 - 7612.258: 84.1960% ( 53) 00:07:26.788 7612.258 - 7662.671: 84.6246% ( 79) 00:07:26.788 7662.671 - 7713.083: 84.9175% ( 54) 00:07:26.788 7713.083 - 7763.495: 85.2376% ( 59) 00:07:26.788 7763.495 - 7813.908: 85.5577% ( 59) 00:07:26.788 7813.908 - 7864.320: 85.8724% ( 58) 00:07:26.788 7864.320 - 7914.732: 86.1274% ( 47) 00:07:26.788 7914.732 - 7965.145: 86.4312% ( 56) 00:07:26.788 7965.145 - 8015.557: 86.7405% ( 57) 00:07:26.788 8015.557 - 8065.969: 87.0714% ( 61) 00:07:26.788 8065.969 - 8116.382: 87.3372% ( 49) 00:07:26.788 8116.382 - 8166.794: 87.6248% ( 53) 00:07:26.788 8166.794 - 8217.206: 87.8309% ( 38) 00:07:26.788 8217.206 - 8267.618: 88.0588% ( 42) 00:07:26.788 8267.618 - 8318.031: 88.2975% ( 44) 00:07:26.788 8318.031 - 8368.443: 88.4983% ( 37) 00:07:26.788 8368.443 - 8418.855: 88.7424% ( 45) 00:07:26.788 8418.855 - 8469.268: 88.9648% ( 41) 00:07:26.788 8469.268 - 8519.680: 89.1222% ( 29) 00:07:26.788 8519.680 - 8570.092: 89.2741% ( 28) 00:07:26.788 8570.092 - 8620.505: 89.4151% ( 26) 00:07:26.788 8620.505 - 8670.917: 89.5833% ( 31) 00:07:26.788 8670.917 - 8721.329: 89.7135% ( 24) 00:07:26.788 8721.329 - 8771.742: 89.8329% ( 22) 00:07:26.788 8771.742 - 8822.154: 89.9902% ( 29) 00:07:26.788 8822.154 - 8872.566: 90.1855% ( 36) 00:07:26.788 8872.566 - 8922.978: 90.3537% ( 31) 00:07:26.788 8922.978 - 8973.391: 90.4622% ( 20) 00:07:26.788 8973.391 - 9023.803: 90.6141% ( 28) 00:07:26.788 9023.803 - 9074.215: 90.7661% ( 28) 00:07:26.788 9074.215 - 9124.628: 90.9397% ( 32) 00:07:26.788 9124.628 - 9175.040: 91.0482% ( 20) 00:07:26.788 9175.040 - 9225.452: 91.2109% ( 30) 00:07:26.788 9225.452 - 9275.865: 91.3628% ( 28) 00:07:26.788 9275.865 - 9326.277: 91.5256% ( 30) 00:07:26.788 9326.277 - 9376.689: 91.6558% ( 24) 00:07:26.788 9376.689 - 9427.102: 91.8403% ( 34) 00:07:26.788 9427.102 - 9477.514: 92.0410% ( 37) 00:07:26.788 9477.514 - 9527.926: 92.1929% ( 28) 00:07:26.788 9527.926 - 9578.338: 92.3503% ( 29) 00:07:26.788 9578.338 - 9628.751: 92.5130% ( 30) 00:07:26.788 9628.751 - 9679.163: 92.6704% ( 29) 00:07:26.788 9679.163 - 9729.575: 92.8168% ( 27) 00:07:26.788 9729.575 - 9779.988: 92.9959% ( 33) 00:07:26.788 9779.988 - 9830.400: 93.1641% ( 31) 00:07:26.788 9830.400 - 9880.812: 93.3105% ( 27) 00:07:26.788 9880.812 - 9931.225: 93.4679% ( 29) 00:07:26.788 9931.225 - 9981.637: 93.6469% ( 33) 00:07:26.788 9981.637 - 10032.049: 93.8151% ( 31) 00:07:26.788 10032.049 - 10082.462: 93.9887% ( 32) 00:07:26.788 10082.462 - 10132.874: 94.1623% ( 32) 00:07:26.788 10132.874 - 10183.286: 94.3305% ( 31) 00:07:26.788 10183.286 - 10233.698: 94.4987% ( 31) 00:07:26.788 10233.698 - 10284.111: 94.6723% ( 32) 00:07:26.788 10284.111 - 10334.523: 94.8242% ( 28) 00:07:26.788 10334.523 - 10384.935: 94.9761% ( 28) 00:07:26.788 10384.935 - 10435.348: 95.1118% ( 25) 00:07:26.788 10435.348 - 10485.760: 95.2311% ( 22) 00:07:26.788 10485.760 - 10536.172: 95.3396% ( 20) 00:07:26.788 10536.172 - 10586.585: 95.4427% ( 19) 00:07:26.788 10586.585 - 10636.997: 95.5675% ( 23) 00:07:26.788 10636.997 - 10687.409: 95.6760% ( 20) 00:07:26.788 10687.409 - 10737.822: 95.7845% ( 20) 00:07:26.788 10737.822 - 10788.234: 95.8496% ( 12) 00:07:26.788 10788.234 - 10838.646: 95.9201% ( 13) 00:07:26.788 10838.646 - 10889.058: 95.9852% ( 12) 00:07:26.788 10889.058 - 10939.471: 96.0612% ( 14) 00:07:26.788 10939.471 - 10989.883: 96.0938% ( 6) 00:07:26.788 10989.883 - 11040.295: 96.1751% ( 15) 00:07:26.788 11040.295 - 11090.708: 96.2348% ( 11) 00:07:26.788 11090.708 - 11141.120: 96.2945% ( 
11) 00:07:26.788 11141.120 - 11191.532: 96.3325% ( 7) 00:07:26.788 11191.532 - 11241.945: 96.3921% ( 11) 00:07:26.788 11241.945 - 11292.357: 96.4735% ( 15) 00:07:26.788 11292.357 - 11342.769: 96.5549% ( 15) 00:07:26.788 11342.769 - 11393.182: 96.6363% ( 15) 00:07:26.788 11393.182 - 11443.594: 96.7122% ( 14) 00:07:26.788 11443.594 - 11494.006: 96.8099% ( 18) 00:07:26.788 11494.006 - 11544.418: 96.8967% ( 16) 00:07:26.788 11544.418 - 11594.831: 96.9889% ( 17) 00:07:26.788 11594.831 - 11645.243: 97.1137% ( 23) 00:07:26.788 11645.243 - 11695.655: 97.2114% ( 18) 00:07:26.788 11695.655 - 11746.068: 97.3362% ( 23) 00:07:26.788 11746.068 - 11796.480: 97.4772% ( 26) 00:07:26.788 11796.480 - 11846.892: 97.5857% ( 20) 00:07:26.788 11846.892 - 11897.305: 97.7051% ( 22) 00:07:26.788 11897.305 - 11947.717: 97.8516% ( 27) 00:07:26.788 11947.717 - 11998.129: 97.9818% ( 24) 00:07:26.788 11998.129 - 12048.542: 98.0957% ( 21) 00:07:26.788 12048.542 - 12098.954: 98.2476% ( 28) 00:07:26.788 12098.954 - 12149.366: 98.3670% ( 22) 00:07:26.788 12149.366 - 12199.778: 98.5189% ( 28) 00:07:26.788 12199.778 - 12250.191: 98.5948% ( 14) 00:07:26.788 12250.191 - 12300.603: 98.6925% ( 18) 00:07:26.788 12300.603 - 12351.015: 98.7847% ( 17) 00:07:26.788 12351.015 - 12401.428: 98.8715% ( 16) 00:07:26.788 12401.428 - 12451.840: 98.9475% ( 14) 00:07:26.788 12451.840 - 12502.252: 99.0289% ( 15) 00:07:26.788 12502.252 - 12552.665: 99.0831% ( 10) 00:07:26.788 12552.665 - 12603.077: 99.1428% ( 11) 00:07:26.788 12603.077 - 12653.489: 99.1862% ( 8) 00:07:26.788 12653.489 - 12703.902: 99.2188% ( 6) 00:07:26.788 12703.902 - 12754.314: 99.2459% ( 5) 00:07:26.788 12754.314 - 12804.726: 99.2784% ( 6) 00:07:26.788 12804.726 - 12855.138: 99.2947% ( 3) 00:07:26.788 12855.138 - 12905.551: 99.3056% ( 2) 00:07:26.788 25004.505 - 25105.329: 99.3273% ( 4) 00:07:26.788 25105.329 - 25206.154: 99.3435% ( 3) 00:07:26.788 25206.154 - 25306.978: 99.3652% ( 4) 00:07:26.788 25306.978 - 25407.803: 99.3869% ( 4) 00:07:26.788 25407.803 - 25508.628: 99.4032% ( 3) 00:07:26.788 25508.628 - 25609.452: 99.4303% ( 5) 00:07:26.788 25609.452 - 25710.277: 99.4520% ( 4) 00:07:26.788 25710.277 - 25811.102: 99.4737% ( 4) 00:07:26.788 25811.102 - 26012.751: 99.5117% ( 7) 00:07:26.788 26012.751 - 26214.400: 99.5551% ( 8) 00:07:26.788 26214.400 - 26416.049: 99.5985% ( 8) 00:07:26.788 26416.049 - 26617.698: 99.6419% ( 8) 00:07:26.788 26617.698 - 26819.348: 99.6528% ( 2) 00:07:26.788 29440.788 - 29642.437: 99.6853% ( 6) 00:07:26.788 29642.437 - 29844.086: 99.7342% ( 9) 00:07:26.788 29844.086 - 30045.735: 99.7667% ( 6) 00:07:26.788 30045.735 - 30247.385: 99.8155% ( 9) 00:07:26.788 30247.385 - 30449.034: 99.8589% ( 8) 00:07:26.788 30449.034 - 30650.683: 99.8969% ( 7) 00:07:26.788 30650.683 - 30852.332: 99.9349% ( 7) 00:07:26.788 30852.332 - 31053.982: 99.9837% ( 9) 00:07:26.788 31053.982 - 31255.631: 100.0000% ( 3) 00:07:26.788 00:07:26.788 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:26.788 ============================================================================== 00:07:26.788 Range in us Cumulative IO count 00:07:26.788 5570.560 - 5595.766: 0.0054% ( 1) 00:07:26.788 5595.766 - 5620.972: 0.0217% ( 3) 00:07:26.788 5620.972 - 5646.178: 0.0705% ( 9) 00:07:26.788 5646.178 - 5671.385: 0.2279% ( 29) 00:07:26.788 5671.385 - 5696.591: 0.7107% ( 89) 00:07:26.788 5696.591 - 5721.797: 1.3835% ( 124) 00:07:26.788 5721.797 - 5747.003: 2.4631% ( 199) 00:07:26.788 5747.003 - 5772.209: 3.7543% ( 238) 00:07:26.788 5772.209 - 5797.415: 5.3060% ( 286) 00:07:26.788 
5797.415 - 5822.622: 6.8414% ( 283) 00:07:26.788 5822.622 - 5847.828: 8.4473% ( 296) 00:07:26.788 5847.828 - 5873.034: 10.2105% ( 325) 00:07:26.788 5873.034 - 5898.240: 11.9900% ( 328) 00:07:26.788 5898.240 - 5923.446: 13.8346% ( 340) 00:07:26.788 5923.446 - 5948.652: 15.8149% ( 365) 00:07:26.788 5948.652 - 5973.858: 17.7083% ( 349) 00:07:26.788 5973.858 - 5999.065: 19.6506% ( 358) 00:07:26.788 5999.065 - 6024.271: 21.5007% ( 341) 00:07:26.788 6024.271 - 6049.477: 23.5840% ( 384) 00:07:26.788 6049.477 - 6074.683: 25.5968% ( 371) 00:07:26.788 6074.683 - 6099.889: 27.6855% ( 385) 00:07:26.788 6099.889 - 6125.095: 29.7960% ( 389) 00:07:26.788 6125.095 - 6150.302: 31.8956% ( 387) 00:07:26.788 6150.302 - 6175.508: 34.0224% ( 392) 00:07:26.788 6175.508 - 6200.714: 36.1003% ( 383) 00:07:26.788 6200.714 - 6225.920: 38.2433% ( 395) 00:07:26.788 6225.920 - 6251.126: 40.3809% ( 394) 00:07:26.788 6251.126 - 6276.332: 42.5076% ( 392) 00:07:26.788 6276.332 - 6301.538: 44.6289% ( 391) 00:07:26.788 6301.538 - 6326.745: 46.8207% ( 404) 00:07:26.788 6326.745 - 6351.951: 49.0614% ( 413) 00:07:26.788 6351.951 - 6377.157: 51.3129% ( 415) 00:07:26.788 6377.157 - 6402.363: 53.5482% ( 412) 00:07:26.788 6402.363 - 6427.569: 55.8377% ( 422) 00:07:26.788 6427.569 - 6452.775: 58.1217% ( 421) 00:07:26.788 6452.775 - 6503.188: 62.5651% ( 819) 00:07:26.788 6503.188 - 6553.600: 66.4225% ( 711) 00:07:26.788 6553.600 - 6604.012: 69.3034% ( 531) 00:07:26.788 6604.012 - 6654.425: 71.3759% ( 382) 00:07:26.788 6654.425 - 6704.837: 72.8950% ( 280) 00:07:26.788 6704.837 - 6755.249: 74.1536% ( 232) 00:07:26.788 6755.249 - 6805.662: 75.2713% ( 206) 00:07:26.788 6805.662 - 6856.074: 76.1610% ( 164) 00:07:26.788 6856.074 - 6906.486: 77.0237% ( 159) 00:07:26.788 6906.486 - 6956.898: 77.8592% ( 154) 00:07:26.788 6956.898 - 7007.311: 78.6838% ( 152) 00:07:26.788 7007.311 - 7057.723: 79.4488% ( 141) 00:07:26.788 7057.723 - 7108.135: 80.1378% ( 127) 00:07:26.788 7108.135 - 7158.548: 80.7834% ( 119) 00:07:26.788 7158.548 - 7208.960: 81.3477% ( 104) 00:07:26.788 7208.960 - 7259.372: 81.8576% ( 94) 00:07:26.789 7259.372 - 7309.785: 82.3459% ( 90) 00:07:26.789 7309.785 - 7360.197: 82.7908% ( 82) 00:07:26.789 7360.197 - 7410.609: 83.2303% ( 81) 00:07:26.789 7410.609 - 7461.022: 83.6046% ( 69) 00:07:26.789 7461.022 - 7511.434: 83.9464% ( 63) 00:07:26.789 7511.434 - 7561.846: 84.2990% ( 65) 00:07:26.789 7561.846 - 7612.258: 84.6408% ( 63) 00:07:26.789 7612.258 - 7662.671: 85.0152% ( 69) 00:07:26.789 7662.671 - 7713.083: 85.3027% ( 53) 00:07:26.789 7713.083 - 7763.495: 85.6120% ( 57) 00:07:26.789 7763.495 - 7813.908: 85.9321% ( 59) 00:07:26.789 7813.908 - 7864.320: 86.2359% ( 56) 00:07:26.789 7864.320 - 7914.732: 86.5289% ( 54) 00:07:26.789 7914.732 - 7965.145: 86.7784% ( 46) 00:07:26.789 7965.145 - 8015.557: 86.9954% ( 40) 00:07:26.789 8015.557 - 8065.969: 87.2613% ( 49) 00:07:26.789 8065.969 - 8116.382: 87.4946% ( 43) 00:07:26.789 8116.382 - 8166.794: 87.7170% ( 41) 00:07:26.789 8166.794 - 8217.206: 87.9449% ( 42) 00:07:26.789 8217.206 - 8267.618: 88.1131% ( 31) 00:07:26.789 8267.618 - 8318.031: 88.2867% ( 32) 00:07:26.789 8318.031 - 8368.443: 88.4711% ( 34) 00:07:26.789 8368.443 - 8418.855: 88.6013% ( 24) 00:07:26.789 8418.855 - 8469.268: 88.7261% ( 23) 00:07:26.789 8469.268 - 8519.680: 88.8455% ( 22) 00:07:26.789 8519.680 - 8570.092: 88.9811% ( 25) 00:07:26.789 8570.092 - 8620.505: 89.0734% ( 17) 00:07:26.789 8620.505 - 8670.917: 89.1764% ( 19) 00:07:26.789 8670.917 - 8721.329: 89.3066% ( 24) 00:07:26.789 8721.329 - 8771.742: 89.4260% ( 22) 
00:07:26.789 8771.742 - 8822.154: 89.5399% ( 21) 00:07:26.789 8822.154 - 8872.566: 89.6322% ( 17) 00:07:26.789 8872.566 - 8922.978: 89.7895% ( 29) 00:07:26.789 8922.978 - 8973.391: 89.9197% ( 24) 00:07:26.789 8973.391 - 9023.803: 90.0879% ( 31) 00:07:26.789 9023.803 - 9074.215: 90.2289% ( 26) 00:07:26.789 9074.215 - 9124.628: 90.3809% ( 28) 00:07:26.789 9124.628 - 9175.040: 90.5762% ( 36) 00:07:26.789 9175.040 - 9225.452: 90.7823% ( 38) 00:07:26.789 9225.452 - 9275.865: 90.9559% ( 32) 00:07:26.789 9275.865 - 9326.277: 91.1513% ( 36) 00:07:26.789 9326.277 - 9376.689: 91.3411% ( 35) 00:07:26.789 9376.689 - 9427.102: 91.5365% ( 36) 00:07:26.789 9427.102 - 9477.514: 91.7318% ( 36) 00:07:26.789 9477.514 - 9527.926: 91.9596% ( 42) 00:07:26.789 9527.926 - 9578.338: 92.2201% ( 48) 00:07:26.789 9578.338 - 9628.751: 92.4533% ( 43) 00:07:26.789 9628.751 - 9679.163: 92.6866% ( 43) 00:07:26.789 9679.163 - 9729.575: 92.9199% ( 43) 00:07:26.789 9729.575 - 9779.988: 93.1912% ( 50) 00:07:26.789 9779.988 - 9830.400: 93.4082% ( 40) 00:07:26.789 9830.400 - 9880.812: 93.6686% ( 48) 00:07:26.789 9880.812 - 9931.225: 93.8965% ( 42) 00:07:26.789 9931.225 - 9981.637: 94.1298% ( 43) 00:07:26.789 9981.637 - 10032.049: 94.3197% ( 35) 00:07:26.789 10032.049 - 10082.462: 94.4933% ( 32) 00:07:26.789 10082.462 - 10132.874: 94.6723% ( 33) 00:07:26.789 10132.874 - 10183.286: 94.8513% ( 33) 00:07:26.789 10183.286 - 10233.698: 95.0195% ( 31) 00:07:26.789 10233.698 - 10284.111: 95.1823% ( 30) 00:07:26.789 10284.111 - 10334.523: 95.3342% ( 28) 00:07:26.789 10334.523 - 10384.935: 95.4698% ( 25) 00:07:26.789 10384.935 - 10435.348: 95.5729% ( 19) 00:07:26.789 10435.348 - 10485.760: 95.6651% ( 17) 00:07:26.789 10485.760 - 10536.172: 95.7682% ( 19) 00:07:26.789 10536.172 - 10586.585: 95.8605% ( 17) 00:07:26.789 10586.585 - 10636.997: 95.9364% ( 14) 00:07:26.789 10636.997 - 10687.409: 95.9907% ( 10) 00:07:26.789 10687.409 - 10737.822: 96.0341% ( 8) 00:07:26.789 10737.822 - 10788.234: 96.0720% ( 7) 00:07:26.789 10788.234 - 10838.646: 96.1046% ( 6) 00:07:26.789 10838.646 - 10889.058: 96.1426% ( 7) 00:07:26.789 10889.058 - 10939.471: 96.1806% ( 7) 00:07:26.789 10939.471 - 10989.883: 96.2131% ( 6) 00:07:26.789 10989.883 - 11040.295: 96.2457% ( 6) 00:07:26.789 11040.295 - 11090.708: 96.2674% ( 4) 00:07:26.789 11090.708 - 11141.120: 96.2891% ( 4) 00:07:26.789 11141.120 - 11191.532: 96.3053% ( 3) 00:07:26.789 11191.532 - 11241.945: 96.3270% ( 4) 00:07:26.789 11241.945 - 11292.357: 96.3487% ( 4) 00:07:26.789 11292.357 - 11342.769: 96.4084% ( 11) 00:07:26.789 11342.769 - 11393.182: 96.4627% ( 10) 00:07:26.789 11393.182 - 11443.594: 96.5224% ( 11) 00:07:26.789 11443.594 - 11494.006: 96.5658% ( 8) 00:07:26.789 11494.006 - 11544.418: 96.6254% ( 11) 00:07:26.789 11544.418 - 11594.831: 96.7014% ( 14) 00:07:26.789 11594.831 - 11645.243: 96.7936% ( 17) 00:07:26.789 11645.243 - 11695.655: 96.8804% ( 16) 00:07:26.789 11695.655 - 11746.068: 96.9727% ( 17) 00:07:26.789 11746.068 - 11796.480: 97.0595% ( 16) 00:07:26.789 11796.480 - 11846.892: 97.1897% ( 24) 00:07:26.789 11846.892 - 11897.305: 97.3036% ( 21) 00:07:26.789 11897.305 - 11947.717: 97.4230% ( 22) 00:07:26.789 11947.717 - 11998.129: 97.5423% ( 22) 00:07:26.789 11998.129 - 12048.542: 97.6671% ( 23) 00:07:26.789 12048.542 - 12098.954: 97.7756% ( 20) 00:07:26.789 12098.954 - 12149.366: 97.9004% ( 23) 00:07:26.789 12149.366 - 12199.778: 98.0252% ( 23) 00:07:26.789 12199.778 - 12250.191: 98.1554% ( 24) 00:07:26.789 12250.191 - 12300.603: 98.2693% ( 21) 00:07:26.789 12300.603 - 12351.015: 98.3887% ( 
22) 00:07:26.789 12351.015 - 12401.428: 98.4972% ( 20) 00:07:26.789 12401.428 - 12451.840: 98.6057% ( 20) 00:07:26.789 12451.840 - 12502.252: 98.6979% ( 17) 00:07:26.789 12502.252 - 12552.665: 98.8064% ( 20) 00:07:26.789 12552.665 - 12603.077: 98.8932% ( 16) 00:07:26.789 12603.077 - 12653.489: 98.9746% ( 15) 00:07:26.789 12653.489 - 12703.902: 99.0451% ( 13) 00:07:26.789 12703.902 - 12754.314: 99.1102% ( 12) 00:07:26.789 12754.314 - 12804.726: 99.1482% ( 7) 00:07:26.789 12804.726 - 12855.138: 99.1970% ( 9) 00:07:26.789 12855.138 - 12905.551: 99.2296% ( 6) 00:07:26.789 12905.551 - 13006.375: 99.2839% ( 10) 00:07:26.789 13006.375 - 13107.200: 99.3056% ( 4) 00:07:26.789 23189.662 - 23290.486: 99.3164% ( 2) 00:07:26.789 23290.486 - 23391.311: 99.3381% ( 4) 00:07:26.789 23391.311 - 23492.135: 99.3598% ( 4) 00:07:26.789 23492.135 - 23592.960: 99.3815% ( 4) 00:07:26.789 23592.960 - 23693.785: 99.4032% ( 4) 00:07:26.789 23693.785 - 23794.609: 99.4249% ( 4) 00:07:26.789 23794.609 - 23895.434: 99.4466% ( 4) 00:07:26.789 23895.434 - 23996.258: 99.4737% ( 5) 00:07:26.789 23996.258 - 24097.083: 99.4846% ( 2) 00:07:26.789 24097.083 - 24197.908: 99.5063% ( 4) 00:07:26.789 24197.908 - 24298.732: 99.5280% ( 4) 00:07:26.789 24298.732 - 24399.557: 99.5497% ( 4) 00:07:26.789 24399.557 - 24500.382: 99.5768% ( 5) 00:07:26.789 24500.382 - 24601.206: 99.5985% ( 4) 00:07:26.789 24601.206 - 24702.031: 99.6202% ( 4) 00:07:26.789 24702.031 - 24802.855: 99.6419% ( 4) 00:07:26.789 24802.855 - 24903.680: 99.6528% ( 2) 00:07:26.789 27625.945 - 27827.594: 99.6691% ( 3) 00:07:26.789 27827.594 - 28029.243: 99.7125% ( 8) 00:07:26.789 28029.243 - 28230.892: 99.7559% ( 8) 00:07:26.789 28230.892 - 28432.542: 99.7993% ( 8) 00:07:26.789 28432.542 - 28634.191: 99.8481% ( 9) 00:07:26.789 28634.191 - 28835.840: 99.8861% ( 7) 00:07:26.789 28835.840 - 29037.489: 99.9349% ( 9) 00:07:26.789 29037.489 - 29239.138: 99.9783% ( 8) 00:07:26.789 29239.138 - 29440.788: 100.0000% ( 4) 00:07:26.789 00:07:26.789 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:26.789 ============================================================================== 00:07:26.789 Range in us Cumulative IO count 00:07:26.789 5520.148 - 5545.354: 0.0054% ( 1) 00:07:26.789 5545.354 - 5570.560: 0.0163% ( 2) 00:07:26.789 5570.560 - 5595.766: 0.0271% ( 2) 00:07:26.789 5595.766 - 5620.972: 0.0488% ( 4) 00:07:26.789 5620.972 - 5646.178: 0.0868% ( 7) 00:07:26.789 5646.178 - 5671.385: 0.2116% ( 23) 00:07:26.789 5671.385 - 5696.591: 0.5534% ( 63) 00:07:26.789 5696.591 - 5721.797: 1.2153% ( 122) 00:07:26.789 5721.797 - 5747.003: 2.2298% ( 187) 00:07:26.789 5747.003 - 5772.209: 3.5482% ( 243) 00:07:26.789 5772.209 - 5797.415: 4.9967% ( 267) 00:07:26.789 5797.415 - 5822.622: 6.7166% ( 317) 00:07:26.789 5822.622 - 5847.828: 8.3116% ( 294) 00:07:26.789 5847.828 - 5873.034: 10.0260% ( 316) 00:07:26.789 5873.034 - 5898.240: 11.9575% ( 356) 00:07:26.789 5898.240 - 5923.446: 13.7967% ( 339) 00:07:26.789 5923.446 - 5948.652: 15.7498% ( 360) 00:07:26.789 5948.652 - 5973.858: 17.7246% ( 364) 00:07:26.789 5973.858 - 5999.065: 19.5909% ( 344) 00:07:26.789 5999.065 - 6024.271: 21.6417% ( 378) 00:07:26.789 6024.271 - 6049.477: 23.5569% ( 353) 00:07:26.789 6049.477 - 6074.683: 25.6022% ( 377) 00:07:26.789 6074.683 - 6099.889: 27.7398% ( 394) 00:07:26.789 6099.889 - 6125.095: 29.8394% ( 387) 00:07:26.789 6125.095 - 6150.302: 31.9282% ( 385) 00:07:26.789 6150.302 - 6175.508: 34.0386% ( 389) 00:07:26.789 6175.508 - 6200.714: 36.1816% ( 395) 00:07:26.789 6200.714 - 6225.920: 
38.2758% ( 386) 00:07:26.789 6225.920 - 6251.126: 40.3537% ( 383) 00:07:26.789 6251.126 - 6276.332: 42.4750% ( 391) 00:07:26.789 6276.332 - 6301.538: 44.6343% ( 398) 00:07:26.789 6301.538 - 6326.745: 46.8207% ( 403) 00:07:26.789 6326.745 - 6351.951: 49.0940% ( 419) 00:07:26.789 6351.951 - 6377.157: 51.3509% ( 416) 00:07:26.789 6377.157 - 6402.363: 53.5536% ( 406) 00:07:26.789 6402.363 - 6427.569: 55.7780% ( 410) 00:07:26.789 6427.569 - 6452.775: 57.9915% ( 408) 00:07:26.789 6452.775 - 6503.188: 62.5000% ( 831) 00:07:26.789 6503.188 - 6553.600: 66.2109% ( 684) 00:07:26.789 6553.600 - 6604.012: 69.0430% ( 522) 00:07:26.789 6604.012 - 6654.425: 71.1317% ( 385) 00:07:26.789 6654.425 - 6704.837: 72.7051% ( 290) 00:07:26.789 6704.837 - 6755.249: 73.9529% ( 230) 00:07:26.789 6755.249 - 6805.662: 75.0543% ( 203) 00:07:26.789 6805.662 - 6856.074: 76.0688% ( 187) 00:07:26.789 6856.074 - 6906.486: 77.1105% ( 192) 00:07:26.789 6906.486 - 6956.898: 78.0382% ( 171) 00:07:26.789 6956.898 - 7007.311: 78.9931% ( 176) 00:07:26.789 7007.311 - 7057.723: 79.8665% ( 161) 00:07:26.789 7057.723 - 7108.135: 80.7237% ( 158) 00:07:26.789 7108.135 - 7158.548: 81.4996% ( 143) 00:07:26.789 7158.548 - 7208.960: 82.1235% ( 115) 00:07:26.789 7208.960 - 7259.372: 82.6877% ( 104) 00:07:26.789 7259.372 - 7309.785: 83.1651% ( 88) 00:07:26.789 7309.785 - 7360.197: 83.5720% ( 75) 00:07:26.789 7360.197 - 7410.609: 83.9518% ( 70) 00:07:26.789 7410.609 - 7461.022: 84.2719% ( 59) 00:07:26.789 7461.022 - 7511.434: 84.5649% ( 54) 00:07:26.789 7511.434 - 7561.846: 84.8362% ( 50) 00:07:26.789 7561.846 - 7612.258: 85.1454% ( 57) 00:07:26.789 7612.258 - 7662.671: 85.4221% ( 51) 00:07:26.789 7662.671 - 7713.083: 85.6608% ( 44) 00:07:26.789 7713.083 - 7763.495: 85.8724% ( 39) 00:07:26.789 7763.495 - 7813.908: 86.0894% ( 40) 00:07:26.789 7813.908 - 7864.320: 86.3010% ( 39) 00:07:26.789 7864.320 - 7914.732: 86.5180% ( 40) 00:07:26.789 7914.732 - 7965.145: 86.6699% ( 28) 00:07:26.789 7965.145 - 8015.557: 86.8001% ( 24) 00:07:26.789 8015.557 - 8065.969: 86.9737% ( 32) 00:07:26.789 8065.969 - 8116.382: 87.1148% ( 26) 00:07:26.789 8116.382 - 8166.794: 87.2504% ( 25) 00:07:26.789 8166.794 - 8217.206: 87.3915% ( 26) 00:07:26.789 8217.206 - 8267.618: 87.6085% ( 40) 00:07:26.789 8267.618 - 8318.031: 87.7767% ( 31) 00:07:26.789 8318.031 - 8368.443: 87.9395% ( 30) 00:07:26.789 8368.443 - 8418.855: 88.1022% ( 30) 00:07:26.789 8418.855 - 8469.268: 88.2541% ( 28) 00:07:26.789 8469.268 - 8519.680: 88.4060% ( 28) 00:07:26.789 8519.680 - 8570.092: 88.6013% ( 36) 00:07:26.789 8570.092 - 8620.505: 88.8075% ( 38) 00:07:26.789 8620.505 - 8670.917: 88.9648% ( 29) 00:07:26.789 8670.917 - 8721.329: 89.1385% ( 32) 00:07:26.789 8721.329 - 8771.742: 89.3880% ( 46) 00:07:26.789 8771.742 - 8822.154: 89.5779% ( 35) 00:07:26.789 8822.154 - 8872.566: 89.7841% ( 38) 00:07:26.789 8872.566 - 8922.978: 89.9902% ( 38) 00:07:26.789 8922.978 - 8973.391: 90.1530% ( 30) 00:07:26.789 8973.391 - 9023.803: 90.3483% ( 36) 00:07:26.789 9023.803 - 9074.215: 90.5545% ( 38) 00:07:26.789 9074.215 - 9124.628: 90.7444% ( 35) 00:07:26.789 9124.628 - 9175.040: 90.9180% ( 32) 00:07:26.789 9175.040 - 9225.452: 91.0753% ( 29) 00:07:26.789 9225.452 - 9275.865: 91.2543% ( 33) 00:07:26.789 9275.865 - 9326.277: 91.4280% ( 32) 00:07:26.789 9326.277 - 9376.689: 91.6178% ( 35) 00:07:26.789 9376.689 - 9427.102: 91.7806% ( 30) 00:07:26.789 9427.102 - 9477.514: 91.9434% ( 30) 00:07:26.789 9477.514 - 9527.926: 92.0844% ( 26) 00:07:26.789 9527.926 - 9578.338: 92.2635% ( 33) 00:07:26.789 9578.338 - 9628.751: 
92.4262% ( 30) 00:07:26.789 9628.751 - 9679.163: 92.5998% ( 32) 00:07:26.789 9679.163 - 9729.575: 92.7734% ( 32) 00:07:26.789 9729.575 - 9779.988: 92.9525% ( 33) 00:07:26.789 9779.988 - 9830.400: 93.1532% ( 37) 00:07:26.789 9830.400 - 9880.812: 93.3322% ( 33) 00:07:26.789 9880.812 - 9931.225: 93.5167% ( 34) 00:07:26.789 9931.225 - 9981.637: 93.7066% ( 35) 00:07:26.789 9981.637 - 10032.049: 93.8856% ( 33) 00:07:26.789 10032.049 - 10082.462: 94.0647% ( 33) 00:07:26.789 10082.462 - 10132.874: 94.2383% ( 32) 00:07:26.789 10132.874 - 10183.286: 94.3848% ( 27) 00:07:26.789 10183.286 - 10233.698: 94.5312% ( 27) 00:07:26.789 10233.698 - 10284.111: 94.6832% ( 28) 00:07:26.789 10284.111 - 10334.523: 94.8242% ( 26) 00:07:26.789 10334.523 - 10384.935: 94.9653% ( 26) 00:07:26.789 10384.935 - 10435.348: 95.1118% ( 27) 00:07:26.789 10435.348 - 10485.760: 95.2365% ( 23) 00:07:26.789 10485.760 - 10536.172: 95.3776% ( 26) 00:07:26.789 10536.172 - 10586.585: 95.5078% ( 24) 00:07:26.789 10586.585 - 10636.997: 95.6272% ( 22) 00:07:26.789 10636.997 - 10687.409: 95.7303% ( 19) 00:07:26.789 10687.409 - 10737.822: 95.8116% ( 15) 00:07:26.789 10737.822 - 10788.234: 95.9039% ( 17) 00:07:26.789 10788.234 - 10838.646: 96.0015% ( 18) 00:07:26.789 10838.646 - 10889.058: 96.0720% ( 13) 00:07:26.789 10889.058 - 10939.471: 96.1372% ( 12) 00:07:26.789 10939.471 - 10989.883: 96.2077% ( 13) 00:07:26.789 10989.883 - 11040.295: 96.2457% ( 7) 00:07:26.789 11040.295 - 11090.708: 96.3053% ( 11) 00:07:26.789 11090.708 - 11141.120: 96.3921% ( 16) 00:07:26.789 11141.120 - 11191.532: 96.4464% ( 10) 00:07:26.789 11191.532 - 11241.945: 96.5224% ( 14) 00:07:26.789 11241.945 - 11292.357: 96.6092% ( 16) 00:07:26.789 11292.357 - 11342.769: 96.7068% ( 18) 00:07:26.789 11342.769 - 11393.182: 96.7882% ( 15) 00:07:26.789 11393.182 - 11443.594: 96.8424% ( 10) 00:07:26.789 11443.594 - 11494.006: 96.8913% ( 9) 00:07:26.789 11494.006 - 11544.418: 96.9347% ( 8) 00:07:26.789 11544.418 - 11594.831: 96.9889% ( 10) 00:07:26.789 11594.831 - 11645.243: 97.0974% ( 20) 00:07:26.789 11645.243 - 11695.655: 97.1734% ( 14) 00:07:26.789 11695.655 - 11746.068: 97.2548% ( 15) 00:07:26.789 11746.068 - 11796.480: 97.3524% ( 18) 00:07:26.789 11796.480 - 11846.892: 97.4555% ( 19) 00:07:26.789 11846.892 - 11897.305: 97.5423% ( 16) 00:07:26.789 11897.305 - 11947.717: 97.6291% ( 16) 00:07:26.789 11947.717 - 11998.129: 97.7376% ( 20) 00:07:26.789 11998.129 - 12048.542: 97.8461% ( 20) 00:07:26.789 12048.542 - 12098.954: 97.9492% ( 19) 00:07:26.789 12098.954 - 12149.366: 98.0577% ( 20) 00:07:26.789 12149.366 - 12199.778: 98.1391% ( 15) 00:07:26.789 12199.778 - 12250.191: 98.2259% ( 16) 00:07:26.789 12250.191 - 12300.603: 98.3236% ( 18) 00:07:26.789 12300.603 - 12351.015: 98.4158% ( 17) 00:07:26.789 12351.015 - 12401.428: 98.5189% ( 19) 00:07:26.789 12401.428 - 12451.840: 98.6057% ( 16) 00:07:26.789 12451.840 - 12502.252: 98.6925% ( 16) 00:07:26.789 12502.252 - 12552.665: 98.7793% ( 16) 00:07:26.789 12552.665 - 12603.077: 98.8661% ( 16) 00:07:26.789 12603.077 - 12653.489: 98.9366% ( 13) 00:07:26.789 12653.489 - 12703.902: 98.9963% ( 11) 00:07:26.789 12703.902 - 12754.314: 99.0506% ( 10) 00:07:26.789 12754.314 - 12804.726: 99.0994% ( 9) 00:07:26.789 12804.726 - 12855.138: 99.1319% ( 6) 00:07:26.789 12855.138 - 12905.551: 99.1591% ( 5) 00:07:26.789 12905.551 - 13006.375: 99.2079% ( 9) 00:07:26.789 13006.375 - 13107.200: 99.2459% ( 7) 00:07:26.789 13107.200 - 13208.025: 99.2784% ( 6) 00:07:26.789 13208.025 - 13308.849: 99.3056% ( 5) 00:07:26.789 21778.117 - 21878.942: 99.3110% 
( 1) 00:07:26.789 21878.942 - 21979.766: 99.3327% ( 4) 00:07:26.789 21979.766 - 22080.591: 99.3544% ( 4) 00:07:26.789 22080.591 - 22181.415: 99.3761% ( 4) 00:07:26.789 22181.415 - 22282.240: 99.3978% ( 4) 00:07:26.789 22282.240 - 22383.065: 99.4195% ( 4) 00:07:26.789 22383.065 - 22483.889: 99.4412% ( 4) 00:07:26.789 22483.889 - 22584.714: 99.4629% ( 4) 00:07:26.789 22584.714 - 22685.538: 99.4900% ( 5) 00:07:26.789 22685.538 - 22786.363: 99.5117% ( 4) 00:07:26.789 22786.363 - 22887.188: 99.5334% ( 4) 00:07:26.789 22887.188 - 22988.012: 99.5605% ( 5) 00:07:26.789 22988.012 - 23088.837: 99.5822% ( 4) 00:07:26.789 23088.837 - 23189.662: 99.6039% ( 4) 00:07:26.789 23189.662 - 23290.486: 99.6257% ( 4) 00:07:26.789 23290.486 - 23391.311: 99.6528% ( 5) 00:07:26.789 26214.400 - 26416.049: 99.6745% ( 4) 00:07:26.789 26416.049 - 26617.698: 99.7233% ( 9) 00:07:26.789 26617.698 - 26819.348: 99.7667% ( 8) 00:07:26.789 26819.348 - 27020.997: 99.8101% ( 8) 00:07:26.789 27020.997 - 27222.646: 99.8589% ( 9) 00:07:26.789 27222.646 - 27424.295: 99.9023% ( 8) 00:07:26.789 27424.295 - 27625.945: 99.9512% ( 9) 00:07:26.789 27625.945 - 27827.594: 100.0000% ( 9) 00:07:26.789 00:07:26.789 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:26.790 ============================================================================== 00:07:26.790 Range in us Cumulative IO count 00:07:26.790 5570.560 - 5595.766: 0.0109% ( 2) 00:07:26.790 5595.766 - 5620.972: 0.0651% ( 10) 00:07:26.790 5620.972 - 5646.178: 0.1790% ( 21) 00:07:26.790 5646.178 - 5671.385: 0.3472% ( 31) 00:07:26.790 5671.385 - 5696.591: 0.7216% ( 69) 00:07:26.790 5696.591 - 5721.797: 1.1990% ( 88) 00:07:26.790 5721.797 - 5747.003: 2.2027% ( 185) 00:07:26.790 5747.003 - 5772.209: 3.5211% ( 243) 00:07:26.790 5772.209 - 5797.415: 5.0781% ( 287) 00:07:26.790 5797.415 - 5822.622: 6.6569% ( 291) 00:07:26.790 5822.622 - 5847.828: 8.4364% ( 328) 00:07:26.790 5847.828 - 5873.034: 10.1617% ( 318) 00:07:26.790 5873.034 - 5898.240: 11.9032% ( 321) 00:07:26.790 5898.240 - 5923.446: 13.7478% ( 340) 00:07:26.790 5923.446 - 5948.652: 15.6955% ( 359) 00:07:26.790 5948.652 - 5973.858: 17.5401% ( 340) 00:07:26.790 5973.858 - 5999.065: 19.4824% ( 358) 00:07:26.790 5999.065 - 6024.271: 21.4735% ( 367) 00:07:26.790 6024.271 - 6049.477: 23.4212% ( 359) 00:07:26.790 6049.477 - 6074.683: 25.3689% ( 359) 00:07:26.790 6074.683 - 6099.889: 27.3438% ( 364) 00:07:26.790 6099.889 - 6125.095: 29.4000% ( 379) 00:07:26.790 6125.095 - 6150.302: 31.5104% ( 389) 00:07:26.790 6150.302 - 6175.508: 33.5612% ( 378) 00:07:26.790 6175.508 - 6200.714: 35.7368% ( 401) 00:07:26.790 6200.714 - 6225.920: 37.8906% ( 397) 00:07:26.790 6225.920 - 6251.126: 39.9306% ( 376) 00:07:26.790 6251.126 - 6276.332: 42.0464% ( 390) 00:07:26.790 6276.332 - 6301.538: 44.1352% ( 385) 00:07:26.790 6301.538 - 6326.745: 46.2674% ( 393) 00:07:26.790 6326.745 - 6351.951: 48.3832% ( 390) 00:07:26.790 6351.951 - 6377.157: 50.6565% ( 419) 00:07:26.790 6377.157 - 6402.363: 52.8863% ( 411) 00:07:26.790 6402.363 - 6427.569: 55.2192% ( 430) 00:07:26.790 6427.569 - 6452.775: 57.5467% ( 429) 00:07:26.790 6452.775 - 6503.188: 61.9683% ( 815) 00:07:26.790 6503.188 - 6553.600: 65.7335% ( 694) 00:07:26.790 6553.600 - 6604.012: 68.5764% ( 524) 00:07:26.790 6604.012 - 6654.425: 70.6543% ( 383) 00:07:26.790 6654.425 - 6704.837: 72.2548% ( 295) 00:07:26.790 6704.837 - 6755.249: 73.5731% ( 243) 00:07:26.790 6755.249 - 6805.662: 74.7450% ( 216) 00:07:26.790 6805.662 - 6856.074: 75.7758% ( 190) 00:07:26.790 6856.074 - 6906.486: 
76.7470% ( 179) 00:07:26.790 6906.486 - 6956.898: 77.6693% ( 170) 00:07:26.790 6956.898 - 7007.311: 78.5970% ( 171) 00:07:26.790 7007.311 - 7057.723: 79.4054% ( 149) 00:07:26.790 7057.723 - 7108.135: 80.2300% ( 152) 00:07:26.790 7108.135 - 7158.548: 81.0221% ( 146) 00:07:26.790 7158.548 - 7208.960: 81.7708% ( 138) 00:07:26.790 7208.960 - 7259.372: 82.3568% ( 108) 00:07:26.790 7259.372 - 7309.785: 82.7854% ( 79) 00:07:26.790 7309.785 - 7360.197: 83.1706% ( 71) 00:07:26.790 7360.197 - 7410.609: 83.5612% ( 72) 00:07:26.790 7410.609 - 7461.022: 83.9355% ( 69) 00:07:26.790 7461.022 - 7511.434: 84.2665% ( 61) 00:07:26.790 7511.434 - 7561.846: 84.5269% ( 48) 00:07:26.790 7561.846 - 7612.258: 84.7711% ( 45) 00:07:26.790 7612.258 - 7662.671: 85.0098% ( 44) 00:07:26.790 7662.671 - 7713.083: 85.2593% ( 46) 00:07:26.790 7713.083 - 7763.495: 85.4601% ( 37) 00:07:26.790 7763.495 - 7813.908: 85.6608% ( 37) 00:07:26.790 7813.908 - 7864.320: 85.8615% ( 37) 00:07:26.790 7864.320 - 7914.732: 86.0569% ( 36) 00:07:26.790 7914.732 - 7965.145: 86.2467% ( 35) 00:07:26.790 7965.145 - 8015.557: 86.3661% ( 22) 00:07:26.790 8015.557 - 8065.969: 86.5180% ( 28) 00:07:26.790 8065.969 - 8116.382: 86.6970% ( 33) 00:07:26.790 8116.382 - 8166.794: 86.8761% ( 33) 00:07:26.790 8166.794 - 8217.206: 87.1148% ( 44) 00:07:26.790 8217.206 - 8267.618: 87.3318% ( 40) 00:07:26.790 8267.618 - 8318.031: 87.5543% ( 41) 00:07:26.790 8318.031 - 8368.443: 87.7713% ( 40) 00:07:26.790 8368.443 - 8418.855: 87.9774% ( 38) 00:07:26.790 8418.855 - 8469.268: 88.1890% ( 39) 00:07:26.790 8469.268 - 8519.680: 88.3843% ( 36) 00:07:26.790 8519.680 - 8570.092: 88.5851% ( 37) 00:07:26.790 8570.092 - 8620.505: 88.7804% ( 36) 00:07:26.790 8620.505 - 8670.917: 88.9974% ( 40) 00:07:26.790 8670.917 - 8721.329: 89.2198% ( 41) 00:07:26.790 8721.329 - 8771.742: 89.4314% ( 39) 00:07:26.790 8771.742 - 8822.154: 89.6539% ( 41) 00:07:26.790 8822.154 - 8872.566: 89.8763% ( 41) 00:07:26.790 8872.566 - 8922.978: 90.1042% ( 42) 00:07:26.790 8922.978 - 8973.391: 90.3320% ( 42) 00:07:26.790 8973.391 - 9023.803: 90.5545% ( 41) 00:07:26.790 9023.803 - 9074.215: 90.7823% ( 42) 00:07:26.790 9074.215 - 9124.628: 90.9614% ( 33) 00:07:26.790 9124.628 - 9175.040: 91.1621% ( 37) 00:07:26.790 9175.040 - 9225.452: 91.3900% ( 42) 00:07:26.790 9225.452 - 9275.865: 91.5799% ( 35) 00:07:26.790 9275.865 - 9326.277: 91.7860% ( 38) 00:07:26.790 9326.277 - 9376.689: 92.0030% ( 40) 00:07:26.790 9376.689 - 9427.102: 92.2038% ( 37) 00:07:26.790 9427.102 - 9477.514: 92.4099% ( 38) 00:07:26.790 9477.514 - 9527.926: 92.5998% ( 35) 00:07:26.790 9527.926 - 9578.338: 92.7734% ( 32) 00:07:26.790 9578.338 - 9628.751: 92.9308% ( 29) 00:07:26.790 9628.751 - 9679.163: 93.0556% ( 23) 00:07:26.790 9679.163 - 9729.575: 93.1478% ( 17) 00:07:26.790 9729.575 - 9779.988: 93.2454% ( 18) 00:07:26.790 9779.988 - 9830.400: 93.3485% ( 19) 00:07:26.790 9830.400 - 9880.812: 93.4408% ( 17) 00:07:26.790 9880.812 - 9931.225: 93.5438% ( 19) 00:07:26.790 9931.225 - 9981.637: 93.6361% ( 17) 00:07:26.790 9981.637 - 10032.049: 93.7446% ( 20) 00:07:26.790 10032.049 - 10082.462: 93.8748% ( 24) 00:07:26.790 10082.462 - 10132.874: 93.9887% ( 21) 00:07:26.790 10132.874 - 10183.286: 94.1243% ( 25) 00:07:26.790 10183.286 - 10233.698: 94.2600% ( 25) 00:07:26.790 10233.698 - 10284.111: 94.4173% ( 29) 00:07:26.790 10284.111 - 10334.523: 94.5692% ( 28) 00:07:26.790 10334.523 - 10384.935: 94.6940% ( 23) 00:07:26.790 10384.935 - 10435.348: 94.8785% ( 34) 00:07:26.790 10435.348 - 10485.760: 95.0412% ( 30) 00:07:26.790 10485.760 - 
10536.172: 95.1606% ( 22) 00:07:26.790 10536.172 - 10586.585: 95.2799% ( 22) 00:07:26.790 10586.585 - 10636.997: 95.4102% ( 24) 00:07:26.790 10636.997 - 10687.409: 95.5458% ( 25) 00:07:26.790 10687.409 - 10737.822: 95.6489% ( 19) 00:07:26.790 10737.822 - 10788.234: 95.7520% ( 19) 00:07:26.790 10788.234 - 10838.646: 95.8984% ( 27) 00:07:26.790 10838.646 - 10889.058: 96.0395% ( 26) 00:07:26.790 10889.058 - 10939.471: 96.1643% ( 23) 00:07:26.790 10939.471 - 10989.883: 96.2619% ( 18) 00:07:26.790 10989.883 - 11040.295: 96.3759% ( 21) 00:07:26.790 11040.295 - 11090.708: 96.4952% ( 22) 00:07:26.790 11090.708 - 11141.120: 96.5820% ( 16) 00:07:26.790 11141.120 - 11191.532: 96.6526% ( 13) 00:07:26.790 11191.532 - 11241.945: 96.7122% ( 11) 00:07:26.790 11241.945 - 11292.357: 96.8045% ( 17) 00:07:26.790 11292.357 - 11342.769: 96.9238% ( 22) 00:07:26.790 11342.769 - 11393.182: 97.0378% ( 21) 00:07:26.790 11393.182 - 11443.594: 97.1517% ( 21) 00:07:26.790 11443.594 - 11494.006: 97.2602% ( 20) 00:07:26.790 11494.006 - 11544.418: 97.3850% ( 23) 00:07:26.790 11544.418 - 11594.831: 97.5098% ( 23) 00:07:26.790 11594.831 - 11645.243: 97.6400% ( 24) 00:07:26.790 11645.243 - 11695.655: 97.7756% ( 25) 00:07:26.790 11695.655 - 11746.068: 97.8787% ( 19) 00:07:26.790 11746.068 - 11796.480: 97.9709% ( 17) 00:07:26.790 11796.480 - 11846.892: 98.0632% ( 17) 00:07:26.790 11846.892 - 11897.305: 98.1554% ( 17) 00:07:26.790 11897.305 - 11947.717: 98.2585% ( 19) 00:07:26.790 11947.717 - 11998.129: 98.3453% ( 16) 00:07:26.790 11998.129 - 12048.542: 98.4321% ( 16) 00:07:26.790 12048.542 - 12098.954: 98.5026% ( 13) 00:07:26.790 12098.954 - 12149.366: 98.5569% ( 10) 00:07:26.790 12149.366 - 12199.778: 98.6328% ( 14) 00:07:26.790 12199.778 - 12250.191: 98.6871% ( 10) 00:07:26.790 12250.191 - 12300.603: 98.7305% ( 8) 00:07:26.790 12300.603 - 12351.015: 98.7956% ( 12) 00:07:26.790 12351.015 - 12401.428: 98.8390% ( 8) 00:07:26.790 12401.428 - 12451.840: 98.8932% ( 10) 00:07:26.790 12451.840 - 12502.252: 98.9312% ( 7) 00:07:26.790 12502.252 - 12552.665: 98.9746% ( 8) 00:07:26.790 12552.665 - 12603.077: 99.0017% ( 5) 00:07:26.790 12603.077 - 12653.489: 99.0289% ( 5) 00:07:26.790 12653.489 - 12703.902: 99.0560% ( 5) 00:07:26.790 12703.902 - 12754.314: 99.0885% ( 6) 00:07:26.790 12754.314 - 12804.726: 99.1211% ( 6) 00:07:26.790 12804.726 - 12855.138: 99.1428% ( 4) 00:07:26.790 12855.138 - 12905.551: 99.1753% ( 6) 00:07:26.790 12905.551 - 13006.375: 99.2242% ( 9) 00:07:26.790 13006.375 - 13107.200: 99.2567% ( 6) 00:07:26.790 13107.200 - 13208.025: 99.2730% ( 3) 00:07:26.790 13208.025 - 13308.849: 99.2893% ( 3) 00:07:26.790 13308.849 - 13409.674: 99.3056% ( 3) 00:07:26.790 20064.098 - 20164.923: 99.3164% ( 2) 00:07:26.790 20164.923 - 20265.748: 99.3381% ( 4) 00:07:26.790 20265.748 - 20366.572: 99.3598% ( 4) 00:07:26.790 20366.572 - 20467.397: 99.3815% ( 4) 00:07:26.790 20467.397 - 20568.222: 99.4086% ( 5) 00:07:26.790 20568.222 - 20669.046: 99.4303% ( 4) 00:07:26.790 20669.046 - 20769.871: 99.4520% ( 4) 00:07:26.790 20769.871 - 20870.695: 99.4737% ( 4) 00:07:26.790 20870.695 - 20971.520: 99.4954% ( 4) 00:07:26.790 20971.520 - 21072.345: 99.5171% ( 4) 00:07:26.790 21072.345 - 21173.169: 99.5388% ( 4) 00:07:26.790 21173.169 - 21273.994: 99.5605% ( 4) 00:07:26.790 21273.994 - 21374.818: 99.5877% ( 5) 00:07:26.790 21374.818 - 21475.643: 99.6094% ( 4) 00:07:26.790 21475.643 - 21576.468: 99.6311% ( 4) 00:07:26.790 21576.468 - 21677.292: 99.6528% ( 4) 00:07:26.790 24500.382 - 24601.206: 99.6636% ( 2) 00:07:26.790 24601.206 - 24702.031: 
99.6908% ( 5) 00:07:26.790 24702.031 - 24802.855: 99.7125% ( 4) 00:07:26.790 24802.855 - 24903.680: 99.7342% ( 4) 00:07:26.790 24903.680 - 25004.505: 99.7559% ( 4) 00:07:26.790 25004.505 - 25105.329: 99.7776% ( 4) 00:07:26.790 25105.329 - 25206.154: 99.7993% ( 4) 00:07:26.790 25206.154 - 25306.978: 99.8210% ( 4) 00:07:26.790 25306.978 - 25407.803: 99.8427% ( 4) 00:07:26.790 25407.803 - 25508.628: 99.8698% ( 5) 00:07:26.790 25508.628 - 25609.452: 99.8915% ( 4) 00:07:26.790 25609.452 - 25710.277: 99.9132% ( 4) 00:07:26.790 25710.277 - 25811.102: 99.9349% ( 4) 00:07:26.790 25811.102 - 26012.751: 99.9837% ( 9) 00:07:26.790 26012.751 - 26214.400: 100.0000% ( 3) 00:07:26.790 00:07:26.790 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:26.790 ============================================================================== 00:07:26.790 Range in us Cumulative IO count 00:07:26.790 5545.354 - 5570.560: 0.0108% ( 2) 00:07:26.790 5570.560 - 5595.766: 0.0162% ( 1) 00:07:26.790 5595.766 - 5620.972: 0.0378% ( 4) 00:07:26.790 5620.972 - 5646.178: 0.0919% ( 10) 00:07:26.790 5646.178 - 5671.385: 0.2649% ( 32) 00:07:26.790 5671.385 - 5696.591: 0.5677% ( 56) 00:07:26.790 5696.591 - 5721.797: 1.1894% ( 115) 00:07:26.790 5721.797 - 5747.003: 2.2005% ( 187) 00:07:26.790 5747.003 - 5772.209: 3.4115% ( 224) 00:07:26.790 5772.209 - 5797.415: 4.9362% ( 282) 00:07:26.790 5797.415 - 5822.622: 6.5041% ( 290) 00:07:26.790 5822.622 - 5847.828: 8.2288% ( 319) 00:07:26.790 5847.828 - 5873.034: 10.0076% ( 329) 00:07:26.790 5873.034 - 5898.240: 11.8891% ( 348) 00:07:26.790 5898.240 - 5923.446: 13.8733% ( 367) 00:07:26.790 5923.446 - 5948.652: 15.7494% ( 347) 00:07:26.790 5948.652 - 5973.858: 17.6471% ( 351) 00:07:26.790 5973.858 - 5999.065: 19.4853% ( 340) 00:07:26.790 5999.065 - 6024.271: 21.4641% ( 366) 00:07:26.790 6024.271 - 6049.477: 23.5132% ( 379) 00:07:26.790 6049.477 - 6074.683: 25.4812% ( 364) 00:07:26.790 6074.683 - 6099.889: 27.5573% ( 384) 00:07:26.790 6099.889 - 6125.095: 29.5902% ( 376) 00:07:26.790 6125.095 - 6150.302: 31.6014% ( 372) 00:07:26.790 6150.302 - 6175.508: 33.6289% ( 375) 00:07:26.790 6175.508 - 6200.714: 35.6618% ( 376) 00:07:26.790 6200.714 - 6225.920: 37.7757% ( 391) 00:07:26.790 6225.920 - 6251.126: 40.0141% ( 414) 00:07:26.790 6251.126 - 6276.332: 42.1713% ( 399) 00:07:26.790 6276.332 - 6301.538: 44.3231% ( 398) 00:07:26.790 6301.538 - 6326.745: 46.5722% ( 416) 00:07:26.790 6326.745 - 6351.951: 48.7457% ( 402) 00:07:26.790 6351.951 - 6377.157: 50.9462% ( 407) 00:07:26.790 6377.157 - 6402.363: 53.1737% ( 412) 00:07:26.790 6402.363 - 6427.569: 55.4336% ( 418) 00:07:26.790 6427.569 - 6452.775: 57.7098% ( 421) 00:07:26.790 6452.775 - 6503.188: 62.1486% ( 821) 00:07:26.790 6503.188 - 6553.600: 65.8575% ( 686) 00:07:26.790 6553.600 - 6604.012: 68.8365% ( 551) 00:07:26.790 6604.012 - 6654.425: 70.8369% ( 370) 00:07:26.790 6654.425 - 6704.837: 72.3292% ( 276) 00:07:26.790 6704.837 - 6755.249: 73.6213% ( 239) 00:07:26.790 6755.249 - 6805.662: 74.7243% ( 204) 00:07:26.790 6805.662 - 6856.074: 75.6704% ( 175) 00:07:26.790 6856.074 - 6906.486: 76.5625% ( 165) 00:07:26.790 6906.486 - 6956.898: 77.4221% ( 159) 00:07:26.790 6956.898 - 7007.311: 78.3899% ( 179) 00:07:26.790 7007.311 - 7057.723: 79.1955% ( 149) 00:07:26.790 7057.723 - 7108.135: 79.9849% ( 146) 00:07:26.790 7108.135 - 7158.548: 80.6985% ( 132) 00:07:26.790 7158.548 - 7208.960: 81.2878% ( 109) 00:07:26.790 7208.960 - 7259.372: 81.7690% ( 89) 00:07:26.790 7259.372 - 7309.785: 82.1853% ( 77) 00:07:26.790 7309.785 - 7360.197: 
82.5746% ( 72) 00:07:26.790 7360.197 - 7410.609: 82.9098% ( 62) 00:07:26.790 7410.609 - 7461.022: 83.1964% ( 53) 00:07:26.790 7461.022 - 7511.434: 83.4775% ( 52) 00:07:26.790 7511.434 - 7561.846: 83.7641% ( 53) 00:07:26.790 7561.846 - 7612.258: 84.0398% ( 51) 00:07:26.790 7612.258 - 7662.671: 84.2831% ( 45) 00:07:26.790 7662.671 - 7713.083: 84.5264% ( 45) 00:07:26.790 7713.083 - 7763.495: 84.7426% ( 40) 00:07:26.790 7763.495 - 7813.908: 85.0724% ( 61) 00:07:26.790 7813.908 - 7864.320: 85.3266% ( 47) 00:07:26.790 7864.320 - 7914.732: 85.5266% ( 37) 00:07:26.790 7914.732 - 7965.145: 85.7807% ( 47) 00:07:26.790 7965.145 - 8015.557: 85.9916% ( 39) 00:07:26.790 8015.557 - 8065.969: 86.1862% ( 36) 00:07:26.790 8065.969 - 8116.382: 86.4349% ( 46) 00:07:26.790 8116.382 - 8166.794: 86.6890% ( 47) 00:07:26.790 8166.794 - 8217.206: 86.9593% ( 50) 00:07:26.790 8217.206 - 8267.618: 87.2351% ( 51) 00:07:26.790 8267.618 - 8318.031: 87.4838% ( 46) 00:07:26.790 8318.031 - 8368.443: 87.7541% ( 50) 00:07:26.790 8368.443 - 8418.855: 88.0082% ( 47) 00:07:26.790 8418.855 - 8469.268: 88.2677% ( 48) 00:07:26.790 8469.268 - 8519.680: 88.5272% ( 48) 00:07:26.790 8519.680 - 8570.092: 88.8462% ( 59) 00:07:26.790 8570.092 - 8620.505: 89.0949% ( 46) 00:07:26.790 8620.505 - 8670.917: 89.4085% ( 58) 00:07:26.790 8670.917 - 8721.329: 89.7059% ( 55) 00:07:26.790 8721.329 - 8771.742: 89.9924% ( 53) 00:07:26.790 8771.742 - 8822.154: 90.2465% ( 47) 00:07:26.790 8822.154 - 8872.566: 90.5006% ( 47) 00:07:26.790 8872.566 - 8922.978: 90.7439% ( 45) 00:07:26.790 8922.978 - 8973.391: 90.9602% ( 40) 00:07:26.790 8973.391 - 9023.803: 91.1927% ( 43) 00:07:26.790 9023.803 - 9074.215: 91.3981% ( 38) 00:07:26.790 9074.215 - 9124.628: 91.5441% ( 27) 00:07:26.790 9124.628 - 9175.040: 91.6739% ( 24) 00:07:26.790 9175.040 - 9225.452: 91.7820% ( 20) 00:07:26.790 9225.452 - 9275.865: 91.9118% ( 24) 00:07:26.790 9275.865 - 9326.277: 92.0307% ( 22) 00:07:26.790 9326.277 - 9376.689: 92.1280% ( 18) 00:07:26.790 9376.689 - 9427.102: 92.2308% ( 19) 00:07:26.790 9427.102 - 9477.514: 92.3281% ( 18) 00:07:26.790 9477.514 - 9527.926: 92.4092% ( 15) 00:07:26.790 9527.926 - 9578.338: 92.5011% ( 17) 00:07:26.790 9578.338 - 9628.751: 92.5984% ( 18) 00:07:26.790 9628.751 - 9679.163: 92.6795% ( 15) 00:07:26.790 9679.163 - 9729.575: 92.7876% ( 20) 00:07:26.790 9729.575 - 9779.988: 92.8958% ( 20) 00:07:26.790 9779.988 - 9830.400: 93.0471% ( 28) 00:07:26.790 9830.400 - 9880.812: 93.1553% ( 20) 00:07:26.790 9880.812 - 9931.225: 93.2850% ( 24) 00:07:26.790 9931.225 - 9981.637: 93.4310% ( 27) 00:07:26.790 9981.637 - 10032.049: 93.5662% ( 25) 00:07:26.790 10032.049 - 10082.462: 93.7392% ( 32) 00:07:26.790 10082.462 - 10132.874: 93.8798% ( 26) 00:07:26.790 10132.874 - 10183.286: 94.0149% ( 25) 00:07:26.790 10183.286 - 10233.698: 94.1176% ( 19) 00:07:26.790 10233.698 - 10284.111: 94.2744% ( 29) 00:07:26.790 10284.111 - 10334.523: 94.3988% ( 23) 00:07:26.790 10334.523 - 10384.935: 94.5556% ( 29) 00:07:26.790 10384.935 - 10435.348: 94.6799% ( 23) 00:07:26.790 10435.348 - 10485.760: 94.8421% ( 30) 00:07:26.790 10485.760 - 10536.172: 95.0151% ( 32) 00:07:26.790 10536.172 - 10586.585: 95.1990% ( 34) 00:07:26.790 10586.585 - 10636.997: 95.3395% ( 26) 00:07:26.790 10636.997 - 10687.409: 95.4585% ( 22) 00:07:26.791 10687.409 - 10737.822: 95.5720% ( 21) 00:07:26.791 10737.822 - 10788.234: 95.6801% ( 20) 00:07:26.791 10788.234 - 10838.646: 95.8045% ( 23) 00:07:26.791 10838.646 - 10889.058: 95.9234% ( 22) 00:07:26.791 10889.058 - 10939.471: 96.0316% ( 20) 00:07:26.791 10939.471 
- 10989.883: 96.1559% ( 23) 00:07:26.791 10989.883 - 11040.295: 96.3235% ( 31) 00:07:26.791 11040.295 - 11090.708: 96.4425% ( 22) 00:07:26.791 11090.708 - 11141.120: 96.5614% ( 22) 00:07:26.791 11141.120 - 11191.532: 96.6966% ( 25) 00:07:26.791 11191.532 - 11241.945: 96.8101% ( 21) 00:07:26.791 11241.945 - 11292.357: 96.9669% ( 29) 00:07:26.791 11292.357 - 11342.769: 97.1507% ( 34) 00:07:26.791 11342.769 - 11393.182: 97.3129% ( 30) 00:07:26.791 11393.182 - 11443.594: 97.4427% ( 24) 00:07:26.791 11443.594 - 11494.006: 97.5454% ( 19) 00:07:26.791 11494.006 - 11544.418: 97.6968% ( 28) 00:07:26.791 11544.418 - 11594.831: 97.8590% ( 30) 00:07:26.791 11594.831 - 11645.243: 98.0374% ( 33) 00:07:26.791 11645.243 - 11695.655: 98.1510% ( 21) 00:07:26.791 11695.655 - 11746.068: 98.2483% ( 18) 00:07:26.791 11746.068 - 11796.480: 98.3186% ( 13) 00:07:26.791 11796.480 - 11846.892: 98.3888% ( 13) 00:07:26.791 11846.892 - 11897.305: 98.4537% ( 12) 00:07:26.791 11897.305 - 11947.717: 98.5132% ( 11) 00:07:26.791 11947.717 - 11998.129: 98.5835% ( 13) 00:07:26.791 11998.129 - 12048.542: 98.6646% ( 15) 00:07:26.791 12048.542 - 12098.954: 98.7078% ( 8) 00:07:26.791 12098.954 - 12149.366: 98.7457% ( 7) 00:07:26.791 12149.366 - 12199.778: 98.7727% ( 5) 00:07:26.791 12199.778 - 12250.191: 98.8051% ( 6) 00:07:26.791 12250.191 - 12300.603: 98.8592% ( 10) 00:07:26.791 12300.603 - 12351.015: 98.9079% ( 9) 00:07:26.791 12351.015 - 12401.428: 98.9457% ( 7) 00:07:26.791 12401.428 - 12451.840: 98.9728% ( 5) 00:07:26.791 12451.840 - 12502.252: 99.0052% ( 6) 00:07:26.791 12502.252 - 12552.665: 99.0376% ( 6) 00:07:26.791 12552.665 - 12603.077: 99.0647% ( 5) 00:07:26.791 12603.077 - 12653.489: 99.0971% ( 6) 00:07:26.791 12653.489 - 12703.902: 99.1295% ( 6) 00:07:26.791 12703.902 - 12754.314: 99.1620% ( 6) 00:07:26.791 12754.314 - 12804.726: 99.1944% ( 6) 00:07:26.791 12804.726 - 12855.138: 99.2160% ( 4) 00:07:26.791 12855.138 - 12905.551: 99.2377% ( 4) 00:07:26.791 12905.551 - 13006.375: 99.2755% ( 7) 00:07:26.791 13006.375 - 13107.200: 99.3080% ( 6) 00:07:26.791 15022.868 - 15123.692: 99.3296% ( 4) 00:07:26.791 15123.692 - 15224.517: 99.3512% ( 4) 00:07:26.791 15224.517 - 15325.342: 99.3728% ( 4) 00:07:26.791 15325.342 - 15426.166: 99.3945% ( 4) 00:07:26.791 15426.166 - 15526.991: 99.4161% ( 4) 00:07:26.791 15526.991 - 15627.815: 99.4377% ( 4) 00:07:26.791 15627.815 - 15728.640: 99.4647% ( 5) 00:07:26.791 15728.640 - 15829.465: 99.4864% ( 4) 00:07:26.791 15829.465 - 15930.289: 99.5080% ( 4) 00:07:26.791 15930.289 - 16031.114: 99.5296% ( 4) 00:07:26.791 16031.114 - 16131.938: 99.5513% ( 4) 00:07:26.791 16131.938 - 16232.763: 99.5729% ( 4) 00:07:26.791 16232.763 - 16333.588: 99.5999% ( 5) 00:07:26.791 16333.588 - 16434.412: 99.6215% ( 4) 00:07:26.791 16434.412 - 16535.237: 99.6432% ( 4) 00:07:26.791 16535.237 - 16636.062: 99.6540% ( 2) 00:07:26.791 19459.151 - 19559.975: 99.6702% ( 3) 00:07:26.791 19559.975 - 19660.800: 99.6918% ( 4) 00:07:26.791 19660.800 - 19761.625: 99.7135% ( 4) 00:07:26.791 19761.625 - 19862.449: 99.7405% ( 5) 00:07:26.791 19862.449 - 19963.274: 99.7621% ( 4) 00:07:26.791 19963.274 - 20064.098: 99.7837% ( 4) 00:07:26.791 20064.098 - 20164.923: 99.8054% ( 4) 00:07:26.791 20164.923 - 20265.748: 99.8270% ( 4) 00:07:26.791 20265.748 - 20366.572: 99.8540% ( 5) 00:07:26.791 20366.572 - 20467.397: 99.8756% ( 4) 00:07:26.791 20467.397 - 20568.222: 99.8973% ( 4) 00:07:26.791 20568.222 - 20669.046: 99.9189% ( 4) 00:07:26.791 20669.046 - 20769.871: 99.9405% ( 4) 00:07:26.791 20769.871 - 20870.695: 99.9622% ( 4) 
00:07:26.791 20870.695 - 20971.520: 99.9892% ( 5) 00:07:26.791 20971.520 - 21072.345: 100.0000% ( 2) 00:07:26.791 00:07:26.791 09:40:05 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:27.730 Initializing NVMe Controllers 00:07:27.730 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:27.730 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:27.730 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:27.730 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:27.730 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:27.730 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:27.730 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:27.730 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:27.730 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:27.730 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:27.730 Initialization complete. Launching workers. 00:07:27.730 ======================================================== 00:07:27.730 Latency(us) 00:07:27.730 Device Information : IOPS MiB/s Average min max 00:07:27.730 PCIE (0000:00:13.0) NSID 1 from core 0: 18042.52 211.44 7104.47 5545.35 31282.30 00:07:27.730 PCIE (0000:00:10.0) NSID 1 from core 0: 18042.52 211.44 7093.39 5313.95 29798.73 00:07:27.730 PCIE (0000:00:11.0) NSID 1 from core 0: 18042.52 211.44 7082.05 5418.58 27885.27 00:07:27.730 PCIE (0000:00:12.0) NSID 1 from core 0: 18042.52 211.44 7071.08 5479.63 26177.66 00:07:27.730 PCIE (0000:00:12.0) NSID 2 from core 0: 18042.52 211.44 7060.21 5587.65 24553.24 00:07:27.730 PCIE (0000:00:12.0) NSID 3 from core 0: 18106.50 212.19 7024.45 5474.37 19386.54 00:07:27.730 ======================================================== 00:07:27.730 Total : 108319.07 1269.36 7072.58 5313.95 31282.30 00:07:27.730 00:07:27.730 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:27.730 ================================================================================= 00:07:27.730 1.00000% : 5923.446us 00:07:27.730 10.00000% : 6301.538us 00:07:27.730 25.00000% : 6503.188us 00:07:27.730 50.00000% : 6755.249us 00:07:27.730 75.00000% : 7108.135us 00:07:27.730 90.00000% : 7914.732us 00:07:27.730 95.00000% : 8922.978us 00:07:27.730 98.00000% : 10636.997us 00:07:27.730 99.00000% : 12300.603us 00:07:27.730 99.50000% : 25811.102us 00:07:27.730 99.90000% : 31053.982us 00:07:27.730 99.99000% : 31255.631us 00:07:27.730 99.99900% : 31457.280us 00:07:27.730 99.99990% : 31457.280us 00:07:27.730 99.99999% : 31457.280us 00:07:27.730 00:07:27.730 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:27.730 ================================================================================= 00:07:27.730 1.00000% : 5847.828us 00:07:27.730 10.00000% : 6251.126us 00:07:27.730 25.00000% : 6452.775us 00:07:27.730 50.00000% : 6755.249us 00:07:27.730 75.00000% : 7158.548us 00:07:27.730 90.00000% : 7914.732us 00:07:27.730 95.00000% : 8872.566us 00:07:27.730 98.00000% : 10939.471us 00:07:27.730 99.00000% : 11796.480us 00:07:27.730 99.50000% : 23996.258us 00:07:27.730 99.90000% : 29440.788us 00:07:27.730 99.99000% : 29844.086us 00:07:27.730 99.99900% : 29844.086us 00:07:27.730 99.99990% : 29844.086us 00:07:27.730 99.99999% : 29844.086us 00:07:27.730 00:07:27.730 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:27.730 ================================================================================= 00:07:27.730 
1.00000% : 5923.446us 00:07:27.730 10.00000% : 6301.538us 00:07:27.730 25.00000% : 6503.188us 00:07:27.730 50.00000% : 6755.249us 00:07:27.730 75.00000% : 7108.135us 00:07:27.730 90.00000% : 7914.732us 00:07:27.730 95.00000% : 8822.154us 00:07:27.730 98.00000% : 11141.120us 00:07:27.730 99.00000% : 11645.243us 00:07:27.730 99.50000% : 22181.415us 00:07:27.730 99.90000% : 27625.945us 00:07:27.730 99.99000% : 28029.243us 00:07:27.730 99.99900% : 28029.243us 00:07:27.730 99.99990% : 28029.243us 00:07:27.730 99.99999% : 28029.243us 00:07:27.730 00:07:27.730 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:27.730 ================================================================================= 00:07:27.730 1.00000% : 5898.240us 00:07:27.730 10.00000% : 6301.538us 00:07:27.730 25.00000% : 6503.188us 00:07:27.730 50.00000% : 6704.837us 00:07:27.730 75.00000% : 7108.135us 00:07:27.730 90.00000% : 7914.732us 00:07:27.730 95.00000% : 8822.154us 00:07:27.730 98.00000% : 10838.646us 00:07:27.730 99.00000% : 12451.840us 00:07:27.730 99.50000% : 21173.169us 00:07:27.730 99.90000% : 25811.102us 00:07:27.730 99.99000% : 26214.400us 00:07:27.730 99.99900% : 26214.400us 00:07:27.730 99.99990% : 26214.400us 00:07:27.730 99.99999% : 26214.400us 00:07:27.730 00:07:27.730 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:27.730 ================================================================================= 00:07:27.730 1.00000% : 5898.240us 00:07:27.730 10.00000% : 6301.538us 00:07:27.730 25.00000% : 6503.188us 00:07:27.730 50.00000% : 6755.249us 00:07:27.730 75.00000% : 7108.135us 00:07:27.730 90.00000% : 7914.732us 00:07:27.730 95.00000% : 9023.803us 00:07:27.730 98.00000% : 10636.997us 00:07:27.730 99.00000% : 12502.252us 00:07:27.730 99.50000% : 19358.326us 00:07:27.730 99.90000% : 24097.083us 00:07:27.730 99.99000% : 24601.206us 00:07:27.730 99.99900% : 24601.206us 00:07:27.730 99.99990% : 24601.206us 00:07:27.730 99.99999% : 24601.206us 00:07:27.730 00:07:27.730 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:27.730 ================================================================================= 00:07:27.730 1.00000% : 5898.240us 00:07:27.730 10.00000% : 6301.538us 00:07:27.730 25.00000% : 6503.188us 00:07:27.730 50.00000% : 6755.249us 00:07:27.730 75.00000% : 7108.135us 00:07:27.730 90.00000% : 7965.145us 00:07:27.730 95.00000% : 9074.215us 00:07:27.730 98.00000% : 10485.760us 00:07:27.730 99.00000% : 12502.252us 00:07:27.730 99.50000% : 13913.797us 00:07:27.730 99.90000% : 18955.028us 00:07:27.730 99.99000% : 19459.151us 00:07:27.730 99.99900% : 19459.151us 00:07:27.730 99.99990% : 19459.151us 00:07:27.730 99.99999% : 19459.151us 00:07:27.730 00:07:27.730 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:27.730 ============================================================================== 00:07:27.731 Range in us Cumulative IO count 00:07:27.731 5520.148 - 5545.354: 0.0055% ( 1) 00:07:27.731 5545.354 - 5570.560: 0.0111% ( 1) 00:07:27.731 5570.560 - 5595.766: 0.0277% ( 3) 00:07:27.731 5595.766 - 5620.972: 0.0499% ( 4) 00:07:27.731 5620.972 - 5646.178: 0.0665% ( 3) 00:07:27.731 5646.178 - 5671.385: 0.0831% ( 3) 00:07:27.731 5671.385 - 5696.591: 0.1164% ( 6) 00:07:27.731 5696.591 - 5721.797: 0.1496% ( 6) 00:07:27.731 5721.797 - 5747.003: 0.1773% ( 5) 00:07:27.731 5747.003 - 5772.209: 0.2549% ( 14) 00:07:27.731 5772.209 - 5797.415: 0.4100% ( 28) 00:07:27.731 5797.415 - 5822.622: 0.4820% ( 13) 00:07:27.731 5822.622 - 
5847.828: 0.5430% ( 11) 00:07:27.731 5847.828 - 5873.034: 0.7979% ( 46) 00:07:27.731 5873.034 - 5898.240: 0.9419% ( 26) 00:07:27.731 5898.240 - 5923.446: 1.2301% ( 52) 00:07:27.731 5923.446 - 5948.652: 1.3963% ( 30) 00:07:27.731 5948.652 - 5973.858: 1.6013% ( 37) 00:07:27.731 5973.858 - 5999.065: 1.8451% ( 44) 00:07:27.731 5999.065 - 6024.271: 2.2606% ( 75) 00:07:27.731 6024.271 - 6049.477: 2.7704% ( 92) 00:07:27.731 6049.477 - 6074.683: 3.1416% ( 67) 00:07:27.731 6074.683 - 6099.889: 3.8564% ( 129) 00:07:27.731 6099.889 - 6125.095: 4.2996% ( 80) 00:07:27.731 6125.095 - 6150.302: 5.2416% ( 170) 00:07:27.731 6150.302 - 6175.508: 6.2777% ( 187) 00:07:27.731 6175.508 - 6200.714: 6.8650% ( 106) 00:07:27.731 6200.714 - 6225.920: 7.5410% ( 122) 00:07:27.731 6225.920 - 6251.126: 8.7046% ( 210) 00:07:27.731 6251.126 - 6276.332: 9.5357% ( 150) 00:07:27.731 6276.332 - 6301.538: 10.6272% ( 197) 00:07:27.731 6301.538 - 6326.745: 11.9127% ( 232) 00:07:27.731 6326.745 - 6351.951: 13.0541% ( 206) 00:07:27.731 6351.951 - 6377.157: 14.6997% ( 297) 00:07:27.731 6377.157 - 6402.363: 16.5503% ( 334) 00:07:27.731 6402.363 - 6427.569: 18.9495% ( 433) 00:07:27.731 6427.569 - 6452.775: 21.4539% ( 452) 00:07:27.731 6452.775 - 6503.188: 25.5319% ( 736) 00:07:27.731 6503.188 - 6553.600: 31.1613% ( 1016) 00:07:27.731 6553.600 - 6604.012: 37.0124% ( 1056) 00:07:27.731 6604.012 - 6654.425: 42.2595% ( 947) 00:07:27.731 6654.425 - 6704.837: 48.6425% ( 1152) 00:07:27.731 6704.837 - 6755.249: 53.0419% ( 794) 00:07:27.731 6755.249 - 6805.662: 57.5853% ( 820) 00:07:27.731 6805.662 - 6856.074: 62.0900% ( 813) 00:07:27.731 6856.074 - 6906.486: 66.0018% ( 706) 00:07:27.731 6906.486 - 6956.898: 69.4758% ( 627) 00:07:27.731 6956.898 - 7007.311: 71.8528% ( 429) 00:07:27.731 7007.311 - 7057.723: 74.2686% ( 436) 00:07:27.731 7057.723 - 7108.135: 76.2799% ( 363) 00:07:27.731 7108.135 - 7158.548: 78.0142% ( 313) 00:07:27.731 7158.548 - 7208.960: 79.7207% ( 308) 00:07:27.731 7208.960 - 7259.372: 80.8123% ( 197) 00:07:27.731 7259.372 - 7309.785: 81.6046% ( 143) 00:07:27.731 7309.785 - 7360.197: 82.5133% ( 164) 00:07:27.731 7360.197 - 7410.609: 83.3278% ( 147) 00:07:27.731 7410.609 - 7461.022: 84.0703% ( 134) 00:07:27.731 7461.022 - 7511.434: 84.8958% ( 149) 00:07:27.731 7511.434 - 7561.846: 85.3834% ( 88) 00:07:27.731 7561.846 - 7612.258: 86.0206% ( 115) 00:07:27.731 7612.258 - 7662.671: 86.5027% ( 87) 00:07:27.731 7662.671 - 7713.083: 87.0567% ( 100) 00:07:27.731 7713.083 - 7763.495: 87.8047% ( 135) 00:07:27.731 7763.495 - 7813.908: 88.6525% ( 153) 00:07:27.731 7813.908 - 7864.320: 89.5889% ( 169) 00:07:27.731 7864.320 - 7914.732: 90.5253% ( 169) 00:07:27.731 7914.732 - 7965.145: 91.3121% ( 142) 00:07:27.731 7965.145 - 8015.557: 91.6777% ( 66) 00:07:27.731 8015.557 - 8065.969: 91.9548% ( 50) 00:07:27.731 8065.969 - 8116.382: 92.3593% ( 73) 00:07:27.731 8116.382 - 8166.794: 92.8469% ( 88) 00:07:27.731 8166.794 - 8217.206: 93.1571% ( 56) 00:07:27.731 8217.206 - 8267.618: 93.3954% ( 43) 00:07:27.731 8267.618 - 8318.031: 93.6503% ( 46) 00:07:27.731 8318.031 - 8368.443: 93.8165% ( 30) 00:07:27.731 8368.443 - 8418.855: 93.9273% ( 20) 00:07:27.731 8418.855 - 8469.268: 94.0824% ( 28) 00:07:27.731 8469.268 - 8519.680: 94.1988% ( 21) 00:07:27.731 8519.680 - 8570.092: 94.4537% ( 46) 00:07:27.731 8570.092 - 8620.505: 94.5479% ( 17) 00:07:27.731 8620.505 - 8670.917: 94.6365% ( 16) 00:07:27.731 8670.917 - 8721.329: 94.7086% ( 13) 00:07:27.731 8721.329 - 8771.742: 94.7750% ( 12) 00:07:27.731 8771.742 - 8822.154: 94.8526% ( 14) 00:07:27.731 8822.154 - 
8872.566: 94.9246% ( 13) 00:07:27.731 8872.566 - 8922.978: 95.0022% ( 14) 00:07:27.731 8922.978 - 8973.391: 95.0687% ( 12) 00:07:27.731 8973.391 - 9023.803: 95.1352% ( 12) 00:07:27.731 9023.803 - 9074.215: 95.1906% ( 10) 00:07:27.731 9074.215 - 9124.628: 95.2626% ( 13) 00:07:27.731 9124.628 - 9175.040: 95.3457% ( 15) 00:07:27.731 9175.040 - 9225.452: 95.4510% ( 19) 00:07:27.731 9225.452 - 9275.865: 95.6117% ( 29) 00:07:27.731 9275.865 - 9326.277: 95.7336% ( 22) 00:07:27.731 9326.277 - 9376.689: 95.8056% ( 13) 00:07:27.731 9376.689 - 9427.102: 95.9109% ( 19) 00:07:27.731 9427.102 - 9477.514: 95.9663% ( 10) 00:07:27.731 9477.514 - 9527.926: 96.0051% ( 7) 00:07:27.731 9527.926 - 9578.338: 96.0605% ( 10) 00:07:27.731 9578.338 - 9628.751: 96.1159% ( 10) 00:07:27.731 9628.751 - 9679.163: 96.1824% ( 12) 00:07:27.731 9679.163 - 9729.575: 96.2434% ( 11) 00:07:27.731 9729.575 - 9779.988: 96.2988% ( 10) 00:07:27.731 9779.988 - 9830.400: 96.3486% ( 9) 00:07:27.731 9830.400 - 9880.812: 96.5259% ( 32) 00:07:27.731 9880.812 - 9931.225: 96.6589% ( 24) 00:07:27.731 9931.225 - 9981.637: 96.7309% ( 13) 00:07:27.731 9981.637 - 10032.049: 96.7974% ( 12) 00:07:27.731 10032.049 - 10082.462: 97.0745% ( 50) 00:07:27.731 10082.462 - 10132.874: 97.1797% ( 19) 00:07:27.731 10132.874 - 10183.286: 97.3127% ( 24) 00:07:27.731 10183.286 - 10233.698: 97.4623% ( 27) 00:07:27.731 10233.698 - 10284.111: 97.5177% ( 10) 00:07:27.731 10284.111 - 10334.523: 97.5676% ( 9) 00:07:27.731 10334.523 - 10384.935: 97.6119% ( 8) 00:07:27.731 10384.935 - 10435.348: 97.6673% ( 10) 00:07:27.731 10435.348 - 10485.760: 97.7283% ( 11) 00:07:27.731 10485.760 - 10536.172: 97.9555% ( 41) 00:07:27.731 10536.172 - 10586.585: 97.9942% ( 7) 00:07:27.731 10586.585 - 10636.997: 98.0275% ( 6) 00:07:27.731 10636.997 - 10687.409: 98.0718% ( 8) 00:07:27.731 10687.409 - 10737.822: 98.1328% ( 11) 00:07:27.731 10737.822 - 10788.234: 98.1549% ( 4) 00:07:27.731 10788.234 - 10838.646: 98.1992% ( 8) 00:07:27.731 10838.646 - 10889.058: 98.2657% ( 12) 00:07:27.731 10889.058 - 10939.471: 98.3433% ( 14) 00:07:27.731 10939.471 - 10989.883: 98.4320% ( 16) 00:07:27.731 10989.883 - 11040.295: 98.5262% ( 17) 00:07:27.731 11040.295 - 11090.708: 98.6758% ( 27) 00:07:27.731 11090.708 - 11141.120: 98.7201% ( 8) 00:07:27.731 11141.120 - 11191.532: 98.7367% ( 3) 00:07:27.731 11191.532 - 11241.945: 98.7644% ( 5) 00:07:27.731 11241.945 - 11292.357: 98.7866% ( 4) 00:07:27.731 11292.357 - 11342.769: 98.8143% ( 5) 00:07:27.731 11342.769 - 11393.182: 98.8420% ( 5) 00:07:27.731 11393.182 - 11443.594: 98.8531% ( 2) 00:07:27.731 11443.594 - 11494.006: 98.9085% ( 10) 00:07:27.731 11494.006 - 11544.418: 98.9362% ( 5) 00:07:27.731 12098.954 - 12149.366: 98.9417% ( 1) 00:07:27.731 12149.366 - 12199.778: 98.9639% ( 4) 00:07:27.731 12199.778 - 12250.191: 98.9805% ( 3) 00:07:27.731 12250.191 - 12300.603: 99.0027% ( 4) 00:07:27.731 12300.603 - 12351.015: 99.0193% ( 3) 00:07:27.731 12351.015 - 12401.428: 99.0414% ( 4) 00:07:27.731 12401.428 - 12451.840: 99.0581% ( 3) 00:07:27.731 12451.840 - 12502.252: 99.0747% ( 3) 00:07:27.731 12502.252 - 12552.665: 99.0858% ( 2) 00:07:27.731 12552.665 - 12603.077: 99.1024% ( 3) 00:07:27.731 12603.077 - 12653.489: 99.1190% ( 3) 00:07:27.731 12653.489 - 12703.902: 99.1356% ( 3) 00:07:27.731 12703.902 - 12754.314: 99.1523% ( 3) 00:07:27.731 12754.314 - 12804.726: 99.1689% ( 3) 00:07:27.731 12804.726 - 12855.138: 99.1855% ( 3) 00:07:27.731 12855.138 - 12905.551: 99.2021% ( 3) 00:07:27.731 12905.551 - 13006.375: 99.2409% ( 7) 00:07:27.731 13006.375 - 13107.200: 
99.2797% ( 7) 00:07:27.731 13107.200 - 13208.025: 99.2908% ( 2) 00:07:27.731 24802.855 - 24903.680: 99.3074% ( 3) 00:07:27.731 24903.680 - 25004.505: 99.3296% ( 4) 00:07:27.731 25004.505 - 25105.329: 99.3517% ( 4) 00:07:27.731 25105.329 - 25206.154: 99.3794% ( 5) 00:07:27.731 25206.154 - 25306.978: 99.4016% ( 4) 00:07:27.731 25306.978 - 25407.803: 99.4293% ( 5) 00:07:27.731 25407.803 - 25508.628: 99.4459% ( 3) 00:07:27.731 25508.628 - 25609.452: 99.4736% ( 5) 00:07:27.731 25609.452 - 25710.277: 99.4958% ( 4) 00:07:27.731 25710.277 - 25811.102: 99.5180% ( 4) 00:07:27.731 25811.102 - 26012.751: 99.5567% ( 7) 00:07:27.731 26012.751 - 26214.400: 99.6066% ( 9) 00:07:27.731 26214.400 - 26416.049: 99.6454% ( 7) 00:07:27.731 29642.437 - 29844.086: 99.6565% ( 2) 00:07:27.731 29844.086 - 30045.735: 99.7008% ( 8) 00:07:27.731 30045.735 - 30247.385: 99.7507% ( 9) 00:07:27.731 30247.385 - 30449.034: 99.8005% ( 9) 00:07:27.731 30449.034 - 30650.683: 99.8504% ( 9) 00:07:27.731 30650.683 - 30852.332: 99.8947% ( 8) 00:07:27.731 30852.332 - 31053.982: 99.9446% ( 9) 00:07:27.731 31053.982 - 31255.631: 99.9945% ( 9) 00:07:27.731 31255.631 - 31457.280: 100.0000% ( 1) 00:07:27.731 00:07:27.731 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:27.731 ============================================================================== 00:07:27.731 Range in us Cumulative IO count 00:07:27.731 5293.292 - 5318.498: 0.0055% ( 1) 00:07:27.731 5318.498 - 5343.705: 0.0111% ( 1) 00:07:27.731 5343.705 - 5368.911: 0.0222% ( 2) 00:07:27.731 5368.911 - 5394.117: 0.0388% ( 3) 00:07:27.732 5394.117 - 5419.323: 0.0554% ( 3) 00:07:27.732 5419.323 - 5444.529: 0.0720% ( 3) 00:07:27.732 5444.529 - 5469.735: 0.0997% ( 5) 00:07:27.732 5469.735 - 5494.942: 0.1219% ( 4) 00:07:27.732 5494.942 - 5520.148: 0.1496% ( 5) 00:07:27.732 5520.148 - 5545.354: 0.1718% ( 4) 00:07:27.732 5545.354 - 5570.560: 0.2050% ( 6) 00:07:27.732 5570.560 - 5595.766: 0.2383% ( 6) 00:07:27.732 5595.766 - 5620.972: 0.2604% ( 4) 00:07:27.732 5620.972 - 5646.178: 0.3158% ( 10) 00:07:27.732 5646.178 - 5671.385: 0.3657% ( 9) 00:07:27.732 5671.385 - 5696.591: 0.4433% ( 14) 00:07:27.732 5696.591 - 5721.797: 0.5042% ( 11) 00:07:27.732 5721.797 - 5747.003: 0.5596% ( 10) 00:07:27.732 5747.003 - 5772.209: 0.6261% ( 12) 00:07:27.732 5772.209 - 5797.415: 0.7923% ( 30) 00:07:27.732 5797.415 - 5822.622: 0.9419% ( 27) 00:07:27.732 5822.622 - 5847.828: 1.1082% ( 30) 00:07:27.732 5847.828 - 5873.034: 1.2633% ( 28) 00:07:27.732 5873.034 - 5898.240: 1.5293% ( 48) 00:07:27.732 5898.240 - 5923.446: 1.8340% ( 55) 00:07:27.732 5923.446 - 5948.652: 2.1886% ( 64) 00:07:27.732 5948.652 - 5973.858: 2.5100% ( 58) 00:07:27.732 5973.858 - 5999.065: 2.8812% ( 67) 00:07:27.732 5999.065 - 6024.271: 3.2192% ( 61) 00:07:27.732 6024.271 - 6049.477: 3.7511% ( 96) 00:07:27.732 6049.477 - 6074.683: 4.2055% ( 82) 00:07:27.732 6074.683 - 6099.889: 4.8094% ( 109) 00:07:27.732 6099.889 - 6125.095: 5.4133% ( 109) 00:07:27.732 6125.095 - 6150.302: 6.0838% ( 121) 00:07:27.732 6150.302 - 6175.508: 6.9703% ( 160) 00:07:27.732 6175.508 - 6200.714: 7.8347% ( 156) 00:07:27.732 6200.714 - 6225.920: 8.8154% ( 177) 00:07:27.732 6225.920 - 6251.126: 10.1507% ( 241) 00:07:27.732 6251.126 - 6276.332: 11.2589% ( 200) 00:07:27.732 6276.332 - 6301.538: 12.7272% ( 265) 00:07:27.732 6301.538 - 6326.745: 14.7274% ( 361) 00:07:27.732 6326.745 - 6351.951: 17.2097% ( 448) 00:07:27.732 6351.951 - 6377.157: 19.8249% ( 472) 00:07:27.732 6377.157 - 6402.363: 22.2684% ( 441) 00:07:27.732 6402.363 - 6427.569: 24.1024% ( 
331) 00:07:27.732 6427.569 - 6452.775: 26.1580% ( 371) 00:07:27.732 6452.775 - 6503.188: 30.8732% ( 851) 00:07:27.732 6503.188 - 6553.600: 36.1370% ( 950) 00:07:27.732 6553.600 - 6604.012: 40.5253% ( 792) 00:07:27.732 6604.012 - 6654.425: 44.5922% ( 734) 00:07:27.732 6654.425 - 6704.837: 48.5705% ( 718) 00:07:27.732 6704.837 - 6755.249: 52.3382% ( 680) 00:07:27.732 6755.249 - 6805.662: 55.6184% ( 592) 00:07:27.732 6805.662 - 6856.074: 59.2420% ( 654) 00:07:27.732 6856.074 - 6906.486: 62.8934% ( 659) 00:07:27.732 6906.486 - 6956.898: 65.6527% ( 498) 00:07:27.732 6956.898 - 7007.311: 68.1350% ( 448) 00:07:27.732 7007.311 - 7057.723: 70.7558% ( 473) 00:07:27.732 7057.723 - 7108.135: 73.1272% ( 428) 00:07:27.732 7108.135 - 7158.548: 75.1330% ( 362) 00:07:27.732 7158.548 - 7208.960: 77.0002% ( 337) 00:07:27.732 7208.960 - 7259.372: 78.6237% ( 293) 00:07:27.732 7259.372 - 7309.785: 79.8648% ( 224) 00:07:27.732 7309.785 - 7360.197: 81.0616% ( 216) 00:07:27.732 7360.197 - 7410.609: 82.1476% ( 196) 00:07:27.732 7410.609 - 7461.022: 83.1449% ( 180) 00:07:27.732 7461.022 - 7511.434: 84.0259% ( 159) 00:07:27.732 7511.434 - 7561.846: 84.8293% ( 145) 00:07:27.732 7561.846 - 7612.258: 85.9541% ( 203) 00:07:27.732 7612.258 - 7662.671: 87.0955% ( 206) 00:07:27.732 7662.671 - 7713.083: 87.9654% ( 157) 00:07:27.732 7713.083 - 7763.495: 88.5805% ( 111) 00:07:27.732 7763.495 - 7813.908: 89.2509% ( 121) 00:07:27.732 7813.908 - 7864.320: 89.8548% ( 109) 00:07:27.732 7864.320 - 7914.732: 90.2981% ( 80) 00:07:27.732 7914.732 - 7965.145: 90.6693% ( 67) 00:07:27.732 7965.145 - 8015.557: 91.0018% ( 60) 00:07:27.732 8015.557 - 8065.969: 91.3508% ( 63) 00:07:27.732 8065.969 - 8116.382: 91.6777% ( 59) 00:07:27.732 8116.382 - 8166.794: 92.0988% ( 76) 00:07:27.732 8166.794 - 8217.206: 92.4313% ( 60) 00:07:27.732 8217.206 - 8267.618: 92.7582% ( 59) 00:07:27.732 8267.618 - 8318.031: 93.1294% ( 67) 00:07:27.732 8318.031 - 8368.443: 93.4619% ( 60) 00:07:27.732 8368.443 - 8418.855: 93.7555% ( 53) 00:07:27.732 8418.855 - 8469.268: 94.0215% ( 48) 00:07:27.732 8469.268 - 8519.680: 94.2376% ( 39) 00:07:27.732 8519.680 - 8570.092: 94.4260% ( 34) 00:07:27.732 8570.092 - 8620.505: 94.5756% ( 27) 00:07:27.732 8620.505 - 8670.917: 94.7141% ( 25) 00:07:27.732 8670.917 - 8721.329: 94.7972% ( 15) 00:07:27.732 8721.329 - 8771.742: 94.8859% ( 16) 00:07:27.732 8771.742 - 8822.154: 94.9745% ( 16) 00:07:27.732 8822.154 - 8872.566: 95.0742% ( 18) 00:07:27.732 8872.566 - 8922.978: 95.1352% ( 11) 00:07:27.732 8922.978 - 8973.391: 95.2516% ( 21) 00:07:27.732 8973.391 - 9023.803: 95.3180% ( 12) 00:07:27.732 9023.803 - 9074.215: 95.3901% ( 13) 00:07:27.732 9074.215 - 9124.628: 95.4621% ( 13) 00:07:27.732 9124.628 - 9175.040: 95.5618% ( 18) 00:07:27.732 9175.040 - 9225.452: 95.6560% ( 17) 00:07:27.732 9225.452 - 9275.865: 95.7558% ( 18) 00:07:27.732 9275.865 - 9326.277: 95.8721% ( 21) 00:07:27.732 9326.277 - 9376.689: 95.9552% ( 15) 00:07:27.732 9376.689 - 9427.102: 96.0162% ( 11) 00:07:27.732 9427.102 - 9477.514: 96.0771% ( 11) 00:07:27.732 9477.514 - 9527.926: 96.1381% ( 11) 00:07:27.732 9527.926 - 9578.338: 96.1935% ( 10) 00:07:27.732 9578.338 - 9628.751: 96.2323% ( 7) 00:07:27.732 9628.751 - 9679.163: 96.2655% ( 6) 00:07:27.732 9679.163 - 9729.575: 96.3043% ( 7) 00:07:27.732 9729.575 - 9779.988: 96.3708% ( 12) 00:07:27.732 9779.988 - 9830.400: 96.4484% ( 14) 00:07:27.732 9830.400 - 9880.812: 96.5813% ( 24) 00:07:27.732 9880.812 - 9931.225: 96.6700% ( 16) 00:07:27.732 9931.225 - 9981.637: 96.7697% ( 18) 00:07:27.732 9981.637 - 10032.049: 96.8418% 
( 13) 00:07:27.732 10032.049 - 10082.462: 96.8750% ( 6) 00:07:27.732 10082.462 - 10132.874: 96.9193% ( 8) 00:07:27.732 10132.874 - 10183.286: 96.9637% ( 8) 00:07:27.732 10183.286 - 10233.698: 96.9914% ( 5) 00:07:27.732 10233.698 - 10284.111: 97.0246% ( 6) 00:07:27.732 10284.111 - 10334.523: 97.0911% ( 12) 00:07:27.732 10334.523 - 10384.935: 97.1299% ( 7) 00:07:27.732 10384.935 - 10435.348: 97.1964% ( 12) 00:07:27.732 10435.348 - 10485.760: 97.2518% ( 10) 00:07:27.732 10485.760 - 10536.172: 97.2906% ( 7) 00:07:27.732 10536.172 - 10586.585: 97.3404% ( 9) 00:07:27.732 10586.585 - 10636.997: 97.5233% ( 33) 00:07:27.732 10636.997 - 10687.409: 97.6507% ( 23) 00:07:27.732 10687.409 - 10737.822: 97.7338% ( 15) 00:07:27.732 10737.822 - 10788.234: 97.7781% ( 8) 00:07:27.732 10788.234 - 10838.646: 97.8668% ( 16) 00:07:27.732 10838.646 - 10889.058: 97.9721% ( 19) 00:07:27.732 10889.058 - 10939.471: 98.1272% ( 28) 00:07:27.732 10939.471 - 10989.883: 98.2048% ( 14) 00:07:27.732 10989.883 - 11040.295: 98.2713% ( 12) 00:07:27.732 11040.295 - 11090.708: 98.4098% ( 25) 00:07:27.732 11090.708 - 11141.120: 98.4430% ( 6) 00:07:27.732 11141.120 - 11191.532: 98.4984% ( 10) 00:07:27.732 11191.532 - 11241.945: 98.5649% ( 12) 00:07:27.732 11241.945 - 11292.357: 98.5926% ( 5) 00:07:27.732 11292.357 - 11342.769: 98.6370% ( 8) 00:07:27.732 11342.769 - 11393.182: 98.6868% ( 9) 00:07:27.732 11393.182 - 11443.594: 98.7367% ( 9) 00:07:27.732 11443.594 - 11494.006: 98.7866% ( 9) 00:07:27.732 11494.006 - 11544.418: 98.8087% ( 4) 00:07:27.732 11544.418 - 11594.831: 98.8420% ( 6) 00:07:27.732 11594.831 - 11645.243: 98.8863% ( 8) 00:07:27.732 11645.243 - 11695.655: 98.9362% ( 9) 00:07:27.732 11695.655 - 11746.068: 98.9750% ( 7) 00:07:27.732 11746.068 - 11796.480: 99.0027% ( 5) 00:07:27.732 11796.480 - 11846.892: 99.0193% ( 3) 00:07:27.732 11846.892 - 11897.305: 99.0359% ( 3) 00:07:27.732 11897.305 - 11947.717: 99.0470% ( 2) 00:07:27.732 11947.717 - 11998.129: 99.0636% ( 3) 00:07:27.732 11998.129 - 12048.542: 99.0747% ( 2) 00:07:27.732 12048.542 - 12098.954: 99.0858% ( 2) 00:07:27.732 12098.954 - 12149.366: 99.1024% ( 3) 00:07:27.732 12149.366 - 12199.778: 99.1135% ( 2) 00:07:27.732 12199.778 - 12250.191: 99.1301% ( 3) 00:07:27.732 12250.191 - 12300.603: 99.1467% ( 3) 00:07:27.732 12300.603 - 12351.015: 99.1578% ( 2) 00:07:27.732 12351.015 - 12401.428: 99.1800% ( 4) 00:07:27.732 12401.428 - 12451.840: 99.1910% ( 2) 00:07:27.732 12451.840 - 12502.252: 99.2132% ( 4) 00:07:27.732 12502.252 - 12552.665: 99.2243% ( 2) 00:07:27.732 12552.665 - 12603.077: 99.2409% ( 3) 00:07:27.732 12603.077 - 12653.489: 99.2575% ( 3) 00:07:27.732 12653.489 - 12703.902: 99.2686% ( 2) 00:07:27.732 12703.902 - 12754.314: 99.2908% ( 4) 00:07:27.732 22988.012 - 23088.837: 99.3129% ( 4) 00:07:27.732 23088.837 - 23189.662: 99.3406% ( 5) 00:07:27.732 23189.662 - 23290.486: 99.3573% ( 3) 00:07:27.732 23290.486 - 23391.311: 99.3794% ( 4) 00:07:27.732 23391.311 - 23492.135: 99.3961% ( 3) 00:07:27.732 23492.135 - 23592.960: 99.4182% ( 4) 00:07:27.732 23592.960 - 23693.785: 99.4404% ( 4) 00:07:27.732 23693.785 - 23794.609: 99.4570% ( 3) 00:07:27.732 23794.609 - 23895.434: 99.4792% ( 4) 00:07:27.732 23895.434 - 23996.258: 99.5013% ( 4) 00:07:27.732 23996.258 - 24097.083: 99.5235% ( 4) 00:07:27.732 24097.083 - 24197.908: 99.5457% ( 4) 00:07:27.732 24197.908 - 24298.732: 99.5623% ( 3) 00:07:27.732 24298.732 - 24399.557: 99.5844% ( 4) 00:07:27.732 24399.557 - 24500.382: 99.6011% ( 3) 00:07:27.732 24500.382 - 24601.206: 99.6288% ( 5) 00:07:27.732 24601.206 - 
24702.031: 99.6454% ( 3) 00:07:27.732 28029.243 - 28230.892: 99.6786% ( 6) 00:07:27.732 28230.892 - 28432.542: 99.7285% ( 9) 00:07:27.732 28432.542 - 28634.191: 99.7617% ( 6) 00:07:27.732 28634.191 - 28835.840: 99.8061% ( 8) 00:07:27.733 28835.840 - 29037.489: 99.8449% ( 7) 00:07:27.733 29037.489 - 29239.138: 99.8836% ( 7) 00:07:27.733 29239.138 - 29440.788: 99.9280% ( 8) 00:07:27.733 29440.788 - 29642.437: 99.9723% ( 8) 00:07:27.733 29642.437 - 29844.086: 100.0000% ( 5) 00:07:27.733 00:07:27.733 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:27.733 ============================================================================== 00:07:27.733 Range in us Cumulative IO count 00:07:27.733 5394.117 - 5419.323: 0.0055% ( 1) 00:07:27.733 5545.354 - 5570.560: 0.0111% ( 1) 00:07:27.733 5595.766 - 5620.972: 0.0166% ( 1) 00:07:27.733 5620.972 - 5646.178: 0.0388% ( 4) 00:07:27.733 5646.178 - 5671.385: 0.0443% ( 1) 00:07:27.733 5671.385 - 5696.591: 0.0665% ( 4) 00:07:27.733 5696.591 - 5721.797: 0.0942% ( 5) 00:07:27.733 5721.797 - 5747.003: 0.1496% ( 10) 00:07:27.733 5747.003 - 5772.209: 0.1995% ( 9) 00:07:27.733 5772.209 - 5797.415: 0.2493% ( 9) 00:07:27.733 5797.415 - 5822.622: 0.4322% ( 33) 00:07:27.733 5822.622 - 5847.828: 0.6372% ( 37) 00:07:27.733 5847.828 - 5873.034: 0.7702% ( 24) 00:07:27.733 5873.034 - 5898.240: 0.9142% ( 26) 00:07:27.733 5898.240 - 5923.446: 1.0638% ( 27) 00:07:27.733 5923.446 - 5948.652: 1.2910% ( 41) 00:07:27.733 5948.652 - 5973.858: 1.5016% ( 38) 00:07:27.733 5973.858 - 5999.065: 1.7176% ( 39) 00:07:27.733 5999.065 - 6024.271: 2.3160% ( 108) 00:07:27.733 6024.271 - 6049.477: 2.8590% ( 98) 00:07:27.733 6049.477 - 6074.683: 3.5073% ( 117) 00:07:27.733 6074.683 - 6099.889: 4.0725% ( 102) 00:07:27.733 6099.889 - 6125.095: 4.5545% ( 87) 00:07:27.733 6125.095 - 6150.302: 5.0809% ( 95) 00:07:27.733 6150.302 - 6175.508: 5.7790% ( 126) 00:07:27.733 6175.508 - 6200.714: 6.8872% ( 200) 00:07:27.733 6200.714 - 6225.920: 7.4523% ( 102) 00:07:27.733 6225.920 - 6251.126: 8.1505% ( 126) 00:07:27.733 6251.126 - 6276.332: 8.9650% ( 147) 00:07:27.733 6276.332 - 6301.538: 10.1285% ( 210) 00:07:27.733 6301.538 - 6326.745: 11.2533% ( 203) 00:07:27.733 6326.745 - 6351.951: 12.9211% ( 301) 00:07:27.733 6351.951 - 6377.157: 14.5723% ( 298) 00:07:27.733 6377.157 - 6402.363: 16.7775% ( 398) 00:07:27.733 6402.363 - 6427.569: 19.0104% ( 403) 00:07:27.733 6427.569 - 6452.775: 21.1824% ( 392) 00:07:27.733 6452.775 - 6503.188: 26.3575% ( 934) 00:07:27.733 6503.188 - 6553.600: 31.8152% ( 985) 00:07:27.733 6553.600 - 6604.012: 37.9765% ( 1112) 00:07:27.733 6604.012 - 6654.425: 43.8774% ( 1065) 00:07:27.733 6654.425 - 6704.837: 49.0691% ( 937) 00:07:27.733 6704.837 - 6755.249: 53.7899% ( 852) 00:07:27.733 6755.249 - 6805.662: 58.8819% ( 919) 00:07:27.733 6805.662 - 6856.074: 62.9987% ( 743) 00:07:27.733 6856.074 - 6906.486: 66.5503% ( 641) 00:07:27.733 6906.486 - 6956.898: 70.0078% ( 624) 00:07:27.733 6956.898 - 7007.311: 72.5898% ( 466) 00:07:27.733 7007.311 - 7057.723: 74.5623% ( 356) 00:07:27.733 7057.723 - 7108.135: 75.9973% ( 259) 00:07:27.733 7108.135 - 7158.548: 77.5044% ( 272) 00:07:27.733 7158.548 - 7208.960: 78.9450% ( 260) 00:07:27.733 7208.960 - 7259.372: 79.7595% ( 147) 00:07:27.733 7259.372 - 7309.785: 80.3967% ( 115) 00:07:27.733 7309.785 - 7360.197: 81.4051% ( 182) 00:07:27.733 7360.197 - 7410.609: 82.2252% ( 148) 00:07:27.733 7410.609 - 7461.022: 82.9566% ( 132) 00:07:27.733 7461.022 - 7511.434: 83.7711% ( 147) 00:07:27.733 7511.434 - 7561.846: 84.5468% ( 140) 
00:07:27.733 7561.846 - 7612.258: 85.4222% ( 158) 00:07:27.733 7612.258 - 7662.671: 86.3475% ( 167) 00:07:27.733 7662.671 - 7713.083: 87.1177% ( 139) 00:07:27.733 7713.083 - 7763.495: 87.9156% ( 144) 00:07:27.733 7763.495 - 7813.908: 88.7633% ( 153) 00:07:27.733 7813.908 - 7864.320: 89.6554% ( 161) 00:07:27.733 7864.320 - 7914.732: 90.3036% ( 117) 00:07:27.733 7914.732 - 7965.145: 90.8633% ( 101) 00:07:27.733 7965.145 - 8015.557: 91.7110% ( 153) 00:07:27.733 8015.557 - 8065.969: 92.1543% ( 80) 00:07:27.733 8065.969 - 8116.382: 92.4479% ( 53) 00:07:27.733 8116.382 - 8166.794: 92.7083% ( 47) 00:07:27.733 8166.794 - 8217.206: 93.0740% ( 66) 00:07:27.733 8217.206 - 8267.618: 93.2347% ( 29) 00:07:27.733 8267.618 - 8318.031: 93.4065% ( 31) 00:07:27.733 8318.031 - 8368.443: 93.6503% ( 44) 00:07:27.733 8368.443 - 8418.855: 93.9051% ( 46) 00:07:27.733 8418.855 - 8469.268: 94.2819% ( 68) 00:07:27.733 8469.268 - 8519.680: 94.4094% ( 23) 00:07:27.733 8519.680 - 8570.092: 94.4925% ( 15) 00:07:27.733 8570.092 - 8620.505: 94.5756% ( 15) 00:07:27.733 8620.505 - 8670.917: 94.6642% ( 16) 00:07:27.733 8670.917 - 8721.329: 94.8027% ( 25) 00:07:27.733 8721.329 - 8771.742: 94.9468% ( 26) 00:07:27.733 8771.742 - 8822.154: 95.1020% ( 28) 00:07:27.733 8822.154 - 8872.566: 95.1684% ( 12) 00:07:27.733 8872.566 - 8922.978: 95.2460% ( 14) 00:07:27.733 8922.978 - 8973.391: 95.3125% ( 12) 00:07:27.733 8973.391 - 9023.803: 95.3734% ( 11) 00:07:27.733 9023.803 - 9074.215: 95.4178% ( 8) 00:07:27.733 9074.215 - 9124.628: 95.4676% ( 9) 00:07:27.733 9124.628 - 9175.040: 95.5064% ( 7) 00:07:27.733 9175.040 - 9225.452: 95.5563% ( 9) 00:07:27.733 9225.452 - 9275.865: 95.6117% ( 10) 00:07:27.733 9275.865 - 9326.277: 95.7391% ( 23) 00:07:27.733 9326.277 - 9376.689: 95.8555% ( 21) 00:07:27.733 9376.689 - 9427.102: 95.9663% ( 20) 00:07:27.733 9427.102 - 9477.514: 96.0162% ( 9) 00:07:27.733 9477.514 - 9527.926: 96.0827% ( 12) 00:07:27.733 9527.926 - 9578.338: 96.1990% ( 21) 00:07:27.733 9578.338 - 9628.751: 96.2434% ( 8) 00:07:27.733 9628.751 - 9679.163: 96.2821% ( 7) 00:07:27.733 9679.163 - 9729.575: 96.3043% ( 4) 00:07:27.733 9729.575 - 9779.988: 96.3486% ( 8) 00:07:27.733 9779.988 - 9830.400: 96.4040% ( 10) 00:07:27.733 9830.400 - 9880.812: 96.4761% ( 13) 00:07:27.733 9880.812 - 9931.225: 96.5924% ( 21) 00:07:27.733 9931.225 - 9981.637: 96.7697% ( 32) 00:07:27.733 9981.637 - 10032.049: 96.8085% ( 7) 00:07:27.733 10032.049 - 10082.462: 96.8805% ( 13) 00:07:27.733 10082.462 - 10132.874: 96.9526% ( 13) 00:07:27.733 10132.874 - 10183.286: 97.1133% ( 29) 00:07:27.733 10183.286 - 10233.698: 97.2795% ( 30) 00:07:27.733 10233.698 - 10284.111: 97.3958% ( 21) 00:07:27.733 10284.111 - 10334.523: 97.4346% ( 7) 00:07:27.733 10334.523 - 10384.935: 97.4789% ( 8) 00:07:27.733 10384.935 - 10435.348: 97.5122% ( 6) 00:07:27.733 10435.348 - 10485.760: 97.5454% ( 6) 00:07:27.733 10485.760 - 10536.172: 97.5676% ( 4) 00:07:27.733 10536.172 - 10586.585: 97.5953% ( 5) 00:07:27.733 10586.585 - 10636.997: 97.6175% ( 4) 00:07:27.733 10636.997 - 10687.409: 97.6396% ( 4) 00:07:27.733 10687.409 - 10737.822: 97.6673% ( 5) 00:07:27.733 10737.822 - 10788.234: 97.6840% ( 3) 00:07:27.733 10788.234 - 10838.646: 97.7117% ( 5) 00:07:27.733 10838.646 - 10889.058: 97.7394% ( 5) 00:07:27.733 10889.058 - 10939.471: 97.7671% ( 5) 00:07:27.733 10939.471 - 10989.883: 97.8059% ( 7) 00:07:27.733 10989.883 - 11040.295: 97.8502% ( 8) 00:07:27.733 11040.295 - 11090.708: 97.9832% ( 24) 00:07:27.733 11090.708 - 11141.120: 98.1605% ( 32) 00:07:27.733 11141.120 - 11191.532: 98.2380% ( 
14) 00:07:27.733 11191.532 - 11241.945: 98.3433% ( 19) 00:07:27.733 11241.945 - 11292.357: 98.4707% ( 23) 00:07:27.733 11292.357 - 11342.769: 98.5428% ( 13) 00:07:27.733 11342.769 - 11393.182: 98.6370% ( 17) 00:07:27.733 11393.182 - 11443.594: 98.7256% ( 16) 00:07:27.733 11443.594 - 11494.006: 98.9140% ( 34) 00:07:27.733 11494.006 - 11544.418: 98.9528% ( 7) 00:07:27.733 11544.418 - 11594.831: 98.9860% ( 6) 00:07:27.733 11594.831 - 11645.243: 99.0248% ( 7) 00:07:27.733 11645.243 - 11695.655: 99.0581% ( 6) 00:07:27.733 11695.655 - 11746.068: 99.0802% ( 4) 00:07:27.733 11746.068 - 11796.480: 99.0969% ( 3) 00:07:27.733 11796.480 - 11846.892: 99.1190% ( 4) 00:07:27.733 11846.892 - 11897.305: 99.1356% ( 3) 00:07:27.733 11897.305 - 11947.717: 99.1523% ( 3) 00:07:27.733 11947.717 - 11998.129: 99.1689% ( 3) 00:07:27.733 11998.129 - 12048.542: 99.1855% ( 3) 00:07:27.733 12048.542 - 12098.954: 99.2021% ( 3) 00:07:27.733 12098.954 - 12149.366: 99.2188% ( 3) 00:07:27.733 12149.366 - 12199.778: 99.2354% ( 3) 00:07:27.733 12199.778 - 12250.191: 99.2520% ( 3) 00:07:27.733 12250.191 - 12300.603: 99.2686% ( 3) 00:07:27.733 12300.603 - 12351.015: 99.2852% ( 3) 00:07:27.733 12351.015 - 12401.428: 99.2908% ( 1) 00:07:27.733 21173.169 - 21273.994: 99.3019% ( 2) 00:07:27.733 21273.994 - 21374.818: 99.3240% ( 4) 00:07:27.733 21374.818 - 21475.643: 99.3462% ( 4) 00:07:27.733 21475.643 - 21576.468: 99.3684% ( 4) 00:07:27.733 21576.468 - 21677.292: 99.3905% ( 4) 00:07:27.733 21677.292 - 21778.117: 99.4127% ( 4) 00:07:27.733 21778.117 - 21878.942: 99.4348% ( 4) 00:07:27.733 21878.942 - 21979.766: 99.4570% ( 4) 00:07:27.733 21979.766 - 22080.591: 99.4792% ( 4) 00:07:27.733 22080.591 - 22181.415: 99.5013% ( 4) 00:07:27.733 22181.415 - 22282.240: 99.5235% ( 4) 00:07:27.733 22282.240 - 22383.065: 99.5457% ( 4) 00:07:27.733 22383.065 - 22483.889: 99.5678% ( 4) 00:07:27.733 22483.889 - 22584.714: 99.5900% ( 4) 00:07:27.733 22584.714 - 22685.538: 99.6121% ( 4) 00:07:27.733 22685.538 - 22786.363: 99.6398% ( 5) 00:07:27.733 22786.363 - 22887.188: 99.6454% ( 1) 00:07:27.733 26214.400 - 26416.049: 99.6786% ( 6) 00:07:27.733 26416.049 - 26617.698: 99.7230% ( 8) 00:07:27.733 26617.698 - 26819.348: 99.7617% ( 7) 00:07:27.733 26819.348 - 27020.997: 99.8061% ( 8) 00:07:27.733 27020.997 - 27222.646: 99.8504% ( 8) 00:07:27.733 27222.646 - 27424.295: 99.8947% ( 8) 00:07:27.733 27424.295 - 27625.945: 99.9391% ( 8) 00:07:27.733 27625.945 - 27827.594: 99.9834% ( 8) 00:07:27.733 27827.594 - 28029.243: 100.0000% ( 3) 00:07:27.734 00:07:27.734 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:27.734 ============================================================================== 00:07:27.734 Range in us Cumulative IO count 00:07:27.734 5469.735 - 5494.942: 0.0055% ( 1) 00:07:27.734 5545.354 - 5570.560: 0.0166% ( 2) 00:07:27.734 5570.560 - 5595.766: 0.0388% ( 4) 00:07:27.734 5595.766 - 5620.972: 0.0609% ( 4) 00:07:27.734 5620.972 - 5646.178: 0.0887% ( 5) 00:07:27.734 5646.178 - 5671.385: 0.1219% ( 6) 00:07:27.734 5671.385 - 5696.591: 0.1330% ( 2) 00:07:27.734 5696.591 - 5721.797: 0.1773% ( 8) 00:07:27.734 5721.797 - 5747.003: 0.2327% ( 10) 00:07:27.734 5747.003 - 5772.209: 0.3047% ( 13) 00:07:27.734 5772.209 - 5797.415: 0.6095% ( 55) 00:07:27.734 5797.415 - 5822.622: 0.7148% ( 19) 00:07:27.734 5822.622 - 5847.828: 0.8145% ( 18) 00:07:27.734 5847.828 - 5873.034: 0.9253% ( 20) 00:07:27.734 5873.034 - 5898.240: 1.0527% ( 23) 00:07:27.734 5898.240 - 5923.446: 1.3076% ( 46) 00:07:27.734 5923.446 - 5948.652: 1.4572% ( 27) 
00:07:27.734 5948.652 - 5973.858: 1.7620% ( 55) 00:07:27.734 5973.858 - 5999.065: 2.0723% ( 56) 00:07:27.734 5999.065 - 6024.271: 2.2939% ( 40) 00:07:27.734 6024.271 - 6049.477: 2.7815% ( 88) 00:07:27.734 6049.477 - 6074.683: 3.5018% ( 130) 00:07:27.734 6074.683 - 6099.889: 4.0614% ( 101) 00:07:27.734 6099.889 - 6125.095: 4.5379% ( 86) 00:07:27.734 6125.095 - 6150.302: 4.8814% ( 62) 00:07:27.734 6150.302 - 6175.508: 5.5020% ( 112) 00:07:27.734 6175.508 - 6200.714: 6.6268% ( 203) 00:07:27.734 6200.714 - 6225.920: 7.5244% ( 162) 00:07:27.734 6225.920 - 6251.126: 8.3278% ( 145) 00:07:27.734 6251.126 - 6276.332: 9.0703% ( 134) 00:07:27.734 6276.332 - 6301.538: 10.0122% ( 170) 00:07:27.734 6301.538 - 6326.745: 11.1259% ( 201) 00:07:27.734 6326.745 - 6351.951: 12.6164% ( 269) 00:07:27.734 6351.951 - 6377.157: 14.3728% ( 317) 00:07:27.734 6377.157 - 6402.363: 16.2733% ( 343) 00:07:27.734 6402.363 - 6427.569: 18.5284% ( 407) 00:07:27.734 6427.569 - 6452.775: 20.6172% ( 377) 00:07:27.734 6452.775 - 6503.188: 25.8976% ( 953) 00:07:27.734 6503.188 - 6553.600: 32.0922% ( 1118) 00:07:27.734 6553.600 - 6604.012: 37.2617% ( 933) 00:07:27.734 6604.012 - 6654.425: 43.3677% ( 1102) 00:07:27.734 6654.425 - 6704.837: 50.0554% ( 1207) 00:07:27.734 6704.837 - 6755.249: 55.1640% ( 922) 00:07:27.734 6755.249 - 6805.662: 59.4526% ( 774) 00:07:27.734 6805.662 - 6856.074: 63.3311% ( 700) 00:07:27.734 6856.074 - 6906.486: 66.1680% ( 512) 00:07:27.734 6906.486 - 6956.898: 69.5312% ( 607) 00:07:27.734 6956.898 - 7007.311: 72.2684% ( 494) 00:07:27.734 7007.311 - 7057.723: 74.5013% ( 403) 00:07:27.734 7057.723 - 7108.135: 76.1303% ( 294) 00:07:27.734 7108.135 - 7158.548: 77.5931% ( 264) 00:07:27.734 7158.548 - 7208.960: 79.1113% ( 274) 00:07:27.734 7208.960 - 7259.372: 79.9313% ( 148) 00:07:27.734 7259.372 - 7309.785: 80.7846% ( 154) 00:07:27.734 7309.785 - 7360.197: 82.0202% ( 223) 00:07:27.734 7360.197 - 7410.609: 83.0009% ( 177) 00:07:27.734 7410.609 - 7461.022: 83.5051% ( 91) 00:07:27.734 7461.022 - 7511.434: 84.1589% ( 118) 00:07:27.734 7511.434 - 7561.846: 84.8127% ( 118) 00:07:27.734 7561.846 - 7612.258: 85.6051% ( 143) 00:07:27.734 7612.258 - 7662.671: 86.1979% ( 107) 00:07:27.734 7662.671 - 7713.083: 86.8406% ( 116) 00:07:27.734 7713.083 - 7763.495: 87.4224% ( 105) 00:07:27.734 7763.495 - 7813.908: 88.1206% ( 126) 00:07:27.734 7813.908 - 7864.320: 89.4559% ( 241) 00:07:27.734 7864.320 - 7914.732: 90.2926% ( 151) 00:07:27.734 7914.732 - 7965.145: 91.0406% ( 135) 00:07:27.734 7965.145 - 8015.557: 91.5780% ( 97) 00:07:27.734 8015.557 - 8065.969: 91.9880% ( 74) 00:07:27.734 8065.969 - 8116.382: 92.3039% ( 57) 00:07:27.734 8116.382 - 8166.794: 92.5643% ( 47) 00:07:27.734 8166.794 - 8217.206: 92.7637% ( 36) 00:07:27.734 8217.206 - 8267.618: 92.9743% ( 38) 00:07:27.734 8267.618 - 8318.031: 93.3067% ( 60) 00:07:27.734 8318.031 - 8368.443: 93.4951% ( 34) 00:07:27.734 8368.443 - 8418.855: 93.7223% ( 41) 00:07:27.734 8418.855 - 8469.268: 93.9051% ( 33) 00:07:27.734 8469.268 - 8519.680: 94.2154% ( 56) 00:07:27.734 8519.680 - 8570.092: 94.3373% ( 22) 00:07:27.734 8570.092 - 8620.505: 94.4426% ( 19) 00:07:27.734 8620.505 - 8670.917: 94.5977% ( 28) 00:07:27.734 8670.917 - 8721.329: 94.8305% ( 42) 00:07:27.734 8721.329 - 8771.742: 94.9579% ( 23) 00:07:27.734 8771.742 - 8822.154: 95.1463% ( 34) 00:07:27.734 8822.154 - 8872.566: 95.2294% ( 15) 00:07:27.734 8872.566 - 8922.978: 95.2903% ( 11) 00:07:27.734 8922.978 - 8973.391: 95.3457% ( 10) 00:07:27.734 8973.391 - 9023.803: 95.3901% ( 8) 00:07:27.734 9023.803 - 9074.215: 95.4732% 
( 15) 00:07:27.734 9074.215 - 9124.628: 95.6394% ( 30) 00:07:27.734 9124.628 - 9175.040: 95.6948% ( 10) 00:07:27.734 9175.040 - 9225.452: 95.7558% ( 11) 00:07:27.734 9225.452 - 9275.865: 95.8278% ( 13) 00:07:27.734 9275.865 - 9326.277: 95.8832% ( 10) 00:07:27.734 9326.277 - 9376.689: 95.9164% ( 6) 00:07:27.734 9376.689 - 9427.102: 95.9608% ( 8) 00:07:27.734 9427.102 - 9477.514: 96.1104% ( 27) 00:07:27.734 9477.514 - 9527.926: 96.1990% ( 16) 00:07:27.734 9527.926 - 9578.338: 96.2544% ( 10) 00:07:27.734 9578.338 - 9628.751: 96.3043% ( 9) 00:07:27.734 9628.751 - 9679.163: 96.3542% ( 9) 00:07:27.734 9679.163 - 9729.575: 96.3874% ( 6) 00:07:27.734 9729.575 - 9779.988: 96.4373% ( 9) 00:07:27.734 9779.988 - 9830.400: 96.4871% ( 9) 00:07:27.734 9830.400 - 9880.812: 96.5315% ( 8) 00:07:27.734 9880.812 - 9931.225: 96.5869% ( 10) 00:07:27.734 9931.225 - 9981.637: 96.6423% ( 10) 00:07:27.734 9981.637 - 10032.049: 96.7254% ( 15) 00:07:27.734 10032.049 - 10082.462: 96.8418% ( 21) 00:07:27.734 10082.462 - 10132.874: 96.9193% ( 14) 00:07:27.734 10132.874 - 10183.286: 97.0135% ( 17) 00:07:27.734 10183.286 - 10233.698: 97.0800% ( 12) 00:07:27.734 10233.698 - 10284.111: 97.2739% ( 35) 00:07:27.734 10284.111 - 10334.523: 97.3958% ( 22) 00:07:27.734 10334.523 - 10384.935: 97.6341% ( 43) 00:07:27.734 10384.935 - 10435.348: 97.7006% ( 12) 00:07:27.734 10435.348 - 10485.760: 97.7615% ( 11) 00:07:27.734 10485.760 - 10536.172: 97.8003% ( 7) 00:07:27.734 10536.172 - 10586.585: 97.8446% ( 8) 00:07:27.734 10586.585 - 10636.997: 97.8945% ( 9) 00:07:27.734 10636.997 - 10687.409: 97.9333% ( 7) 00:07:27.734 10687.409 - 10737.822: 97.9665% ( 6) 00:07:27.734 10737.822 - 10788.234: 97.9998% ( 6) 00:07:27.734 10788.234 - 10838.646: 98.0109% ( 2) 00:07:27.734 10838.646 - 10889.058: 98.0441% ( 6) 00:07:27.734 10889.058 - 10939.471: 98.0829% ( 7) 00:07:27.734 10939.471 - 10989.883: 98.1272% ( 8) 00:07:27.734 10989.883 - 11040.295: 98.2990% ( 31) 00:07:27.734 11040.295 - 11090.708: 98.4043% ( 19) 00:07:27.734 11090.708 - 11141.120: 98.4375% ( 6) 00:07:27.734 11141.120 - 11191.532: 98.4597% ( 4) 00:07:27.734 11191.532 - 11241.945: 98.4874% ( 5) 00:07:27.734 11241.945 - 11292.357: 98.5317% ( 8) 00:07:27.734 11292.357 - 11342.769: 98.5594% ( 5) 00:07:27.734 11342.769 - 11393.182: 98.7256% ( 30) 00:07:27.734 11393.182 - 11443.594: 98.8198% ( 17) 00:07:27.734 11443.594 - 11494.006: 98.8531% ( 6) 00:07:27.734 11494.006 - 11544.418: 98.8863% ( 6) 00:07:27.734 11544.418 - 11594.831: 98.9251% ( 7) 00:07:27.734 11594.831 - 11645.243: 98.9362% ( 2) 00:07:27.734 12048.542 - 12098.954: 98.9417% ( 1) 00:07:27.734 12250.191 - 12300.603: 98.9473% ( 1) 00:07:27.734 12300.603 - 12351.015: 98.9639% ( 3) 00:07:27.734 12351.015 - 12401.428: 98.9805% ( 3) 00:07:27.734 12401.428 - 12451.840: 99.0027% ( 4) 00:07:27.734 12451.840 - 12502.252: 99.0193% ( 3) 00:07:27.734 12502.252 - 12552.665: 99.2132% ( 35) 00:07:27.734 12552.665 - 12603.077: 99.2409% ( 5) 00:07:27.734 12603.077 - 12653.489: 99.2520% ( 2) 00:07:27.734 12653.489 - 12703.902: 99.2686% ( 3) 00:07:27.734 12703.902 - 12754.314: 99.2852% ( 3) 00:07:27.734 12754.314 - 12804.726: 99.2908% ( 1) 00:07:27.734 20064.098 - 20164.923: 99.3074% ( 3) 00:07:27.734 20164.923 - 20265.748: 99.3240% ( 3) 00:07:27.734 20265.748 - 20366.572: 99.3406% ( 3) 00:07:27.734 20366.572 - 20467.397: 99.3628% ( 4) 00:07:27.734 20467.397 - 20568.222: 99.3794% ( 3) 00:07:27.734 20568.222 - 20669.046: 99.4016% ( 4) 00:07:27.734 20669.046 - 20769.871: 99.4238% ( 4) 00:07:27.734 20769.871 - 20870.695: 99.4459% ( 4) 
00:07:27.734 20870.695 - 20971.520: 99.4681% ( 4) 00:07:27.734 20971.520 - 21072.345: 99.4847% ( 3) 00:07:27.734 21072.345 - 21173.169: 99.5124% ( 5) 00:07:27.734 21173.169 - 21273.994: 99.5346% ( 4) 00:07:27.734 21273.994 - 21374.818: 99.5567% ( 4) 00:07:27.734 21374.818 - 21475.643: 99.5789% ( 4) 00:07:27.734 21475.643 - 21576.468: 99.6011% ( 4) 00:07:27.734 21576.468 - 21677.292: 99.6232% ( 4) 00:07:27.734 21677.292 - 21778.117: 99.6454% ( 4) 00:07:27.734 24500.382 - 24601.206: 99.6509% ( 1) 00:07:27.734 24601.206 - 24702.031: 99.6731% ( 4) 00:07:27.734 24702.031 - 24802.855: 99.6953% ( 4) 00:07:27.734 24802.855 - 24903.680: 99.7174% ( 4) 00:07:27.734 24903.680 - 25004.505: 99.7396% ( 4) 00:07:27.734 25004.505 - 25105.329: 99.7617% ( 4) 00:07:27.734 25105.329 - 25206.154: 99.7839% ( 4) 00:07:27.734 25206.154 - 25306.978: 99.8061% ( 4) 00:07:27.735 25306.978 - 25407.803: 99.8282% ( 4) 00:07:27.735 25407.803 - 25508.628: 99.8504% ( 4) 00:07:27.735 25508.628 - 25609.452: 99.8726% ( 4) 00:07:27.735 25609.452 - 25710.277: 99.8947% ( 4) 00:07:27.735 25710.277 - 25811.102: 99.9169% ( 4) 00:07:27.735 25811.102 - 26012.751: 99.9557% ( 7) 00:07:27.735 26012.751 - 26214.400: 100.0000% ( 8) 00:07:27.735 00:07:27.735 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:27.735 ============================================================================== 00:07:27.735 Range in us Cumulative IO count 00:07:27.735 5570.560 - 5595.766: 0.0111% ( 2) 00:07:27.735 5595.766 - 5620.972: 0.0222% ( 2) 00:07:27.735 5620.972 - 5646.178: 0.0443% ( 4) 00:07:27.735 5646.178 - 5671.385: 0.0776% ( 6) 00:07:27.735 5671.385 - 5696.591: 0.1108% ( 6) 00:07:27.735 5696.591 - 5721.797: 0.1551% ( 8) 00:07:27.735 5721.797 - 5747.003: 0.1939% ( 7) 00:07:27.735 5747.003 - 5772.209: 0.3103% ( 21) 00:07:27.735 5772.209 - 5797.415: 0.4654% ( 28) 00:07:27.735 5797.415 - 5822.622: 0.5319% ( 12) 00:07:27.735 5822.622 - 5847.828: 0.6095% ( 14) 00:07:27.735 5847.828 - 5873.034: 0.8699% ( 47) 00:07:27.735 5873.034 - 5898.240: 1.0195% ( 27) 00:07:27.735 5898.240 - 5923.446: 1.3963% ( 68) 00:07:27.735 5923.446 - 5948.652: 1.5736% ( 32) 00:07:27.735 5948.652 - 5973.858: 1.7398% ( 30) 00:07:27.735 5973.858 - 5999.065: 1.9282% ( 34) 00:07:27.735 5999.065 - 6024.271: 2.1886% ( 47) 00:07:27.735 6024.271 - 6049.477: 2.6485% ( 83) 00:07:27.735 6049.477 - 6074.683: 3.3522% ( 127) 00:07:27.735 6074.683 - 6099.889: 3.8121% ( 83) 00:07:27.735 6099.889 - 6125.095: 4.4603% ( 117) 00:07:27.735 6125.095 - 6150.302: 4.8980% ( 79) 00:07:27.735 6150.302 - 6175.508: 5.6461% ( 135) 00:07:27.735 6175.508 - 6200.714: 6.7431% ( 198) 00:07:27.735 6200.714 - 6225.920: 7.6518% ( 164) 00:07:27.735 6225.920 - 6251.126: 8.5827% ( 168) 00:07:27.735 6251.126 - 6276.332: 9.3418% ( 137) 00:07:27.735 6276.332 - 6301.538: 10.4444% ( 199) 00:07:27.735 6301.538 - 6326.745: 11.4195% ( 176) 00:07:27.735 6326.745 - 6351.951: 12.9100% ( 269) 00:07:27.735 6351.951 - 6377.157: 14.4670% ( 281) 00:07:27.735 6377.157 - 6402.363: 16.5226% ( 371) 00:07:27.735 6402.363 - 6427.569: 18.4508% ( 348) 00:07:27.735 6427.569 - 6452.775: 21.0827% ( 475) 00:07:27.735 6452.775 - 6503.188: 26.2356% ( 930) 00:07:27.735 6503.188 - 6553.600: 31.5381% ( 957) 00:07:27.735 6553.600 - 6604.012: 36.7575% ( 942) 00:07:27.735 6604.012 - 6654.425: 42.7693% ( 1085) 00:07:27.735 6654.425 - 6704.837: 49.0304% ( 1130) 00:07:27.735 6704.837 - 6755.249: 53.8508% ( 870) 00:07:27.735 6755.249 - 6805.662: 58.6602% ( 868) 00:07:27.735 6805.662 - 6856.074: 63.1649% ( 813) 00:07:27.735 6856.074 - 
6906.486: 67.1764% ( 724) 00:07:27.735 6906.486 - 6956.898: 70.4344% ( 588) 00:07:27.735 6956.898 - 7007.311: 72.5177% ( 376) 00:07:27.735 7007.311 - 7057.723: 74.3517% ( 331) 00:07:27.735 7057.723 - 7108.135: 75.9973% ( 297) 00:07:27.735 7108.135 - 7158.548: 78.0530% ( 371) 00:07:27.735 7158.548 - 7208.960: 79.2110% ( 209) 00:07:27.735 7208.960 - 7259.372: 80.2970% ( 196) 00:07:27.735 7259.372 - 7309.785: 81.0727% ( 140) 00:07:27.735 7309.785 - 7360.197: 82.2307% ( 209) 00:07:27.735 7360.197 - 7410.609: 83.1117% ( 159) 00:07:27.735 7410.609 - 7461.022: 84.0869% ( 176) 00:07:27.735 7461.022 - 7511.434: 84.7352% ( 117) 00:07:27.735 7511.434 - 7561.846: 85.3336% ( 108) 00:07:27.735 7561.846 - 7612.258: 85.8322% ( 90) 00:07:27.735 7612.258 - 7662.671: 86.3697% ( 97) 00:07:27.735 7662.671 - 7713.083: 87.0401% ( 121) 00:07:27.735 7713.083 - 7763.495: 87.9322% ( 161) 00:07:27.735 7763.495 - 7813.908: 88.5860% ( 118) 00:07:27.735 7813.908 - 7864.320: 89.5445% ( 173) 00:07:27.735 7864.320 - 7914.732: 90.1707% ( 113) 00:07:27.735 7914.732 - 7965.145: 90.6859% ( 93) 00:07:27.735 7965.145 - 8015.557: 91.2068% ( 94) 00:07:27.735 8015.557 - 8065.969: 91.5448% ( 61) 00:07:27.735 8065.969 - 8116.382: 91.8994% ( 64) 00:07:27.735 8116.382 - 8166.794: 92.1820% ( 51) 00:07:27.735 8166.794 - 8217.206: 92.5698% ( 70) 00:07:27.735 8217.206 - 8267.618: 92.8524% ( 51) 00:07:27.735 8267.618 - 8318.031: 93.0629% ( 38) 00:07:27.735 8318.031 - 8368.443: 93.1738% ( 20) 00:07:27.735 8368.443 - 8418.855: 93.3289% ( 28) 00:07:27.735 8418.855 - 8469.268: 93.4674% ( 25) 00:07:27.735 8469.268 - 8519.680: 93.7943% ( 59) 00:07:27.735 8519.680 - 8570.092: 93.9384% ( 26) 00:07:27.735 8570.092 - 8620.505: 94.1102% ( 31) 00:07:27.735 8620.505 - 8670.917: 94.2154% ( 19) 00:07:27.735 8670.917 - 8721.329: 94.3373% ( 22) 00:07:27.735 8721.329 - 8771.742: 94.4481% ( 20) 00:07:27.735 8771.742 - 8822.154: 94.5091% ( 11) 00:07:27.735 8822.154 - 8872.566: 94.6365% ( 23) 00:07:27.735 8872.566 - 8922.978: 94.8360% ( 36) 00:07:27.735 8922.978 - 8973.391: 94.9634% ( 23) 00:07:27.735 8973.391 - 9023.803: 95.0632% ( 18) 00:07:27.735 9023.803 - 9074.215: 95.1795% ( 21) 00:07:27.735 9074.215 - 9124.628: 95.2737% ( 17) 00:07:27.735 9124.628 - 9175.040: 95.4178% ( 26) 00:07:27.735 9175.040 - 9225.452: 95.5230% ( 19) 00:07:27.735 9225.452 - 9275.865: 95.7114% ( 34) 00:07:27.735 9275.865 - 9326.277: 95.8610% ( 27) 00:07:27.735 9326.277 - 9376.689: 95.9774% ( 21) 00:07:27.735 9376.689 - 9427.102: 96.1547% ( 32) 00:07:27.735 9427.102 - 9477.514: 96.3154% ( 29) 00:07:27.735 9477.514 - 9527.926: 96.4040% ( 16) 00:07:27.735 9527.926 - 9578.338: 96.5536% ( 27) 00:07:27.735 9578.338 - 9628.751: 96.6977% ( 26) 00:07:27.735 9628.751 - 9679.163: 96.7753% ( 14) 00:07:27.735 9679.163 - 9729.575: 96.8362% ( 11) 00:07:27.735 9729.575 - 9779.988: 96.8861% ( 9) 00:07:27.735 9779.988 - 9830.400: 96.9415% ( 10) 00:07:27.735 9830.400 - 9880.812: 96.9747% ( 6) 00:07:27.735 9880.812 - 9931.225: 96.9969% ( 4) 00:07:27.735 9931.225 - 9981.637: 97.0246% ( 5) 00:07:27.735 9981.637 - 10032.049: 97.0578% ( 6) 00:07:27.735 10032.049 - 10082.462: 97.0966% ( 7) 00:07:27.735 10082.462 - 10132.874: 97.1354% ( 7) 00:07:27.735 10132.874 - 10183.286: 97.2019% ( 12) 00:07:27.735 10183.286 - 10233.698: 97.2739% ( 13) 00:07:27.735 10233.698 - 10284.111: 97.3570% ( 15) 00:07:27.735 10284.111 - 10334.523: 97.5953% ( 43) 00:07:27.735 10334.523 - 10384.935: 97.6729% ( 14) 00:07:27.735 10384.935 - 10435.348: 97.8336% ( 29) 00:07:27.735 10435.348 - 10485.760: 97.8779% ( 8) 00:07:27.735 
10485.760 - 10536.172: 97.9277% ( 9) 00:07:27.735 10536.172 - 10586.585: 97.9776% ( 9) 00:07:27.735 10586.585 - 10636.997: 98.0109% ( 6) 00:07:27.735 10636.997 - 10687.409: 98.0386% ( 5) 00:07:27.736 10687.409 - 10737.822: 98.0552% ( 3) 00:07:27.736 10737.822 - 10788.234: 98.0718% ( 3) 00:07:27.736 10788.234 - 10838.646: 98.0884% ( 3) 00:07:27.736 10838.646 - 10889.058: 98.1106% ( 4) 00:07:27.736 10889.058 - 10939.471: 98.1383% ( 5) 00:07:27.736 10939.471 - 10989.883: 98.1882% ( 9) 00:07:27.736 10989.883 - 11040.295: 98.2325% ( 8) 00:07:27.736 11040.295 - 11090.708: 98.2824% ( 9) 00:07:27.736 11090.708 - 11141.120: 98.4541% ( 31) 00:07:27.736 11141.120 - 11191.532: 98.5040% ( 9) 00:07:27.736 11191.532 - 11241.945: 98.5649% ( 11) 00:07:27.736 11241.945 - 11292.357: 98.6093% ( 8) 00:07:27.736 11292.357 - 11342.769: 98.6480% ( 7) 00:07:27.736 11342.769 - 11393.182: 98.6813% ( 6) 00:07:27.736 11393.182 - 11443.594: 98.7589% ( 14) 00:07:27.736 11443.594 - 11494.006: 98.8918% ( 24) 00:07:27.736 11494.006 - 11544.418: 98.9085% ( 3) 00:07:27.736 11544.418 - 11594.831: 98.9195% ( 2) 00:07:27.736 11594.831 - 11645.243: 98.9362% ( 3) 00:07:27.736 12351.015 - 12401.428: 98.9417% ( 1) 00:07:27.736 12401.428 - 12451.840: 98.9971% ( 10) 00:07:27.736 12451.840 - 12502.252: 99.0082% ( 2) 00:07:27.736 12502.252 - 12552.665: 99.0304% ( 4) 00:07:27.736 12552.665 - 12603.077: 99.0470% ( 3) 00:07:27.736 12603.077 - 12653.489: 99.0636% ( 3) 00:07:27.736 12653.489 - 12703.902: 99.0802% ( 3) 00:07:27.736 12703.902 - 12754.314: 99.0969% ( 3) 00:07:27.736 12754.314 - 12804.726: 99.1190% ( 4) 00:07:27.736 12804.726 - 12855.138: 99.1356% ( 3) 00:07:27.736 12855.138 - 12905.551: 99.1523% ( 3) 00:07:27.736 12905.551 - 13006.375: 99.1910% ( 7) 00:07:27.736 13006.375 - 13107.200: 99.2298% ( 7) 00:07:27.736 13107.200 - 13208.025: 99.2742% ( 8) 00:07:27.736 13208.025 - 13308.849: 99.2908% ( 3) 00:07:27.736 18350.080 - 18450.905: 99.3019% ( 2) 00:07:27.736 18450.905 - 18551.729: 99.3240% ( 4) 00:07:27.736 18551.729 - 18652.554: 99.3462% ( 4) 00:07:27.736 18652.554 - 18753.378: 99.3628% ( 3) 00:07:27.736 18753.378 - 18854.203: 99.3905% ( 5) 00:07:27.736 18854.203 - 18955.028: 99.4127% ( 4) 00:07:27.736 18955.028 - 19055.852: 99.4348% ( 4) 00:07:27.736 19055.852 - 19156.677: 99.4570% ( 4) 00:07:27.736 19156.677 - 19257.502: 99.4792% ( 4) 00:07:27.736 19257.502 - 19358.326: 99.5013% ( 4) 00:07:27.736 19358.326 - 19459.151: 99.5235% ( 4) 00:07:27.736 19459.151 - 19559.975: 99.5457% ( 4) 00:07:27.736 19559.975 - 19660.800: 99.5623% ( 3) 00:07:27.736 19660.800 - 19761.625: 99.5844% ( 4) 00:07:27.736 19761.625 - 19862.449: 99.6066% ( 4) 00:07:27.736 19862.449 - 19963.274: 99.6343% ( 5) 00:07:27.736 19963.274 - 20064.098: 99.6454% ( 2) 00:07:27.736 22887.188 - 22988.012: 99.6620% ( 3) 00:07:27.736 22988.012 - 23088.837: 99.6786% ( 3) 00:07:27.736 23088.837 - 23189.662: 99.7063% ( 5) 00:07:27.736 23189.662 - 23290.486: 99.7285% ( 4) 00:07:27.736 23290.486 - 23391.311: 99.7451% ( 3) 00:07:27.736 23391.311 - 23492.135: 99.7728% ( 5) 00:07:27.736 23492.135 - 23592.960: 99.7950% ( 4) 00:07:27.736 23592.960 - 23693.785: 99.8172% ( 4) 00:07:27.736 23693.785 - 23794.609: 99.8393% ( 4) 00:07:27.736 23794.609 - 23895.434: 99.8615% ( 4) 00:07:27.736 23895.434 - 23996.258: 99.8781% ( 3) 00:07:27.736 23996.258 - 24097.083: 99.9003% ( 4) 00:07:27.736 24097.083 - 24197.908: 99.9224% ( 4) 00:07:27.736 24197.908 - 24298.732: 99.9446% ( 4) 00:07:27.736 24298.732 - 24399.557: 99.9668% ( 4) 00:07:27.736 24399.557 - 24500.382: 99.9834% ( 3) 00:07:27.736 
24500.382 - 24601.206: 100.0000% ( 3) 00:07:27.736 00:07:27.736 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:27.736 ============================================================================== 00:07:27.736 Range in us Cumulative IO count 00:07:27.736 5469.735 - 5494.942: 0.0055% ( 1) 00:07:27.736 5494.942 - 5520.148: 0.0166% ( 2) 00:07:27.736 5520.148 - 5545.354: 0.0221% ( 1) 00:07:27.736 5570.560 - 5595.766: 0.0276% ( 1) 00:07:27.736 5595.766 - 5620.972: 0.0442% ( 3) 00:07:27.736 5620.972 - 5646.178: 0.0607% ( 3) 00:07:27.736 5646.178 - 5671.385: 0.0828% ( 4) 00:07:27.736 5671.385 - 5696.591: 0.0939% ( 2) 00:07:27.736 5696.591 - 5721.797: 0.1325% ( 7) 00:07:27.736 5721.797 - 5747.003: 0.1822% ( 9) 00:07:27.736 5747.003 - 5772.209: 0.2374% ( 10) 00:07:27.736 5772.209 - 5797.415: 0.3313% ( 17) 00:07:27.736 5797.415 - 5822.622: 0.5190% ( 34) 00:07:27.736 5822.622 - 5847.828: 0.6018% ( 15) 00:07:27.736 5847.828 - 5873.034: 0.7343% ( 24) 00:07:27.736 5873.034 - 5898.240: 1.1153% ( 69) 00:07:27.736 5898.240 - 5923.446: 1.2754% ( 29) 00:07:27.736 5923.446 - 5948.652: 1.4852% ( 38) 00:07:27.736 5948.652 - 5973.858: 1.7447% ( 47) 00:07:27.736 5973.858 - 5999.065: 2.0484% ( 55) 00:07:27.736 5999.065 - 6024.271: 2.3852% ( 61) 00:07:27.736 6024.271 - 6049.477: 2.9814% ( 108) 00:07:27.736 6049.477 - 6074.683: 3.5998% ( 112) 00:07:27.736 6074.683 - 6099.889: 4.1630% ( 102) 00:07:27.736 6099.889 - 6125.095: 4.8034% ( 116) 00:07:27.736 6125.095 - 6150.302: 5.2562% ( 82) 00:07:27.736 6150.302 - 6175.508: 6.0402% ( 142) 00:07:27.736 6175.508 - 6200.714: 6.7083% ( 121) 00:07:27.736 6200.714 - 6225.920: 7.5751% ( 157) 00:07:27.736 6225.920 - 6251.126: 8.6628% ( 197) 00:07:27.736 6251.126 - 6276.332: 9.6842% ( 185) 00:07:27.736 6276.332 - 6301.538: 10.7332% ( 190) 00:07:27.736 6301.538 - 6326.745: 11.8540% ( 203) 00:07:27.736 6326.745 - 6351.951: 13.5490% ( 307) 00:07:27.736 6351.951 - 6377.157: 14.9735% ( 258) 00:07:27.736 6377.157 - 6402.363: 16.5194% ( 280) 00:07:27.736 6402.363 - 6427.569: 18.3690% ( 335) 00:07:27.736 6427.569 - 6452.775: 20.5996% ( 404) 00:07:27.736 6452.775 - 6503.188: 26.3913% ( 1049) 00:07:27.736 6503.188 - 6553.600: 30.8580% ( 809) 00:07:27.736 6553.600 - 6604.012: 36.1307% ( 955) 00:07:27.736 6604.012 - 6654.425: 42.0439% ( 1071) 00:07:27.736 6654.425 - 6704.837: 48.0897% ( 1095) 00:07:27.736 6704.837 - 6755.249: 53.5115% ( 982) 00:07:27.736 6755.249 - 6805.662: 57.9119% ( 797) 00:07:27.736 6805.662 - 6856.074: 62.9086% ( 905) 00:07:27.736 6856.074 - 6906.486: 66.5139% ( 653) 00:07:27.736 6906.486 - 6956.898: 69.4843% ( 538) 00:07:27.736 6956.898 - 7007.311: 72.1124% ( 476) 00:07:27.736 7007.311 - 7057.723: 74.0062% ( 343) 00:07:27.736 7057.723 - 7108.135: 75.9055% ( 344) 00:07:27.736 7108.135 - 7158.548: 78.1250% ( 402) 00:07:27.736 7158.548 - 7208.960: 79.3562% ( 223) 00:07:27.736 7208.960 - 7259.372: 80.1955% ( 152) 00:07:27.736 7259.372 - 7309.785: 81.3770% ( 214) 00:07:27.736 7309.785 - 7360.197: 82.1334% ( 137) 00:07:27.736 7360.197 - 7410.609: 83.2376% ( 200) 00:07:27.736 7410.609 - 7461.022: 83.9057% ( 121) 00:07:27.736 7461.022 - 7511.434: 84.5903% ( 124) 00:07:27.736 7511.434 - 7561.846: 85.1204% ( 96) 00:07:27.736 7561.846 - 7612.258: 85.7884% ( 121) 00:07:27.736 7612.258 - 7662.671: 86.7160% ( 168) 00:07:27.736 7662.671 - 7713.083: 87.2736% ( 101) 00:07:27.736 7713.083 - 7763.495: 87.7319% ( 83) 00:07:27.736 7763.495 - 7813.908: 88.2951% ( 102) 00:07:27.736 7813.908 - 7864.320: 88.8527% ( 101) 00:07:27.736 7864.320 - 7914.732: 89.8189% ( 175) 
00:07:27.736 7914.732 - 7965.145: 90.4759% ( 119) 00:07:27.736 7965.145 - 8015.557: 90.8955% ( 76) 00:07:27.736 8015.557 - 8065.969: 91.5029% ( 110) 00:07:27.736 8065.969 - 8116.382: 92.0384% ( 97) 00:07:27.736 8116.382 - 8166.794: 92.4525% ( 75) 00:07:27.736 8166.794 - 8217.206: 92.7727% ( 58) 00:07:27.736 8217.206 - 8267.618: 92.9936% ( 40) 00:07:27.736 8267.618 - 8318.031: 93.1427% ( 27) 00:07:27.736 8318.031 - 8368.443: 93.2862% ( 26) 00:07:27.736 8368.443 - 8418.855: 93.4574% ( 31) 00:07:27.736 8418.855 - 8469.268: 93.6120% ( 28) 00:07:27.736 8469.268 - 8519.680: 93.8052% ( 35) 00:07:27.736 8519.680 - 8570.092: 93.9708% ( 30) 00:07:27.736 8570.092 - 8620.505: 94.0813% ( 20) 00:07:27.736 8620.505 - 8670.917: 94.1696% ( 16) 00:07:27.736 8670.917 - 8721.329: 94.2745% ( 19) 00:07:27.736 8721.329 - 8771.742: 94.4346% ( 29) 00:07:27.736 8771.742 - 8822.154: 94.5782% ( 26) 00:07:27.736 8822.154 - 8872.566: 94.6720% ( 17) 00:07:27.736 8872.566 - 8922.978: 94.7328% ( 11) 00:07:27.736 8922.978 - 8973.391: 94.8377% ( 19) 00:07:27.736 8973.391 - 9023.803: 94.9426% ( 19) 00:07:27.736 9023.803 - 9074.215: 95.0917% ( 27) 00:07:27.736 9074.215 - 9124.628: 95.2739% ( 33) 00:07:27.736 9124.628 - 9175.040: 95.3732% ( 18) 00:07:27.736 9175.040 - 9225.452: 95.4505% ( 14) 00:07:27.736 9225.452 - 9275.865: 95.5223% ( 13) 00:07:27.736 9275.865 - 9326.277: 95.5775% ( 10) 00:07:27.736 9326.277 - 9376.689: 95.6272% ( 9) 00:07:27.736 9376.689 - 9427.102: 95.6935% ( 12) 00:07:27.736 9427.102 - 9477.514: 95.7487% ( 10) 00:07:27.736 9477.514 - 9527.926: 95.8260% ( 14) 00:07:27.736 9527.926 - 9578.338: 95.9033% ( 14) 00:07:27.736 9578.338 - 9628.751: 95.9916% ( 16) 00:07:27.736 9628.751 - 9679.163: 96.2732% ( 51) 00:07:27.736 9679.163 - 9729.575: 96.4609% ( 34) 00:07:27.736 9729.575 - 9779.988: 96.6762% ( 39) 00:07:27.736 9779.988 - 9830.400: 96.8308% ( 28) 00:07:27.736 9830.400 - 9880.812: 97.0186% ( 34) 00:07:27.736 9880.812 - 9931.225: 97.1345% ( 21) 00:07:27.736 9931.225 - 9981.637: 97.3609% ( 41) 00:07:27.736 9981.637 - 10032.049: 97.4382% ( 14) 00:07:27.736 10032.049 - 10082.462: 97.4823% ( 8) 00:07:27.736 10082.462 - 10132.874: 97.5265% ( 8) 00:07:27.736 10132.874 - 10183.286: 97.5928% ( 12) 00:07:27.736 10183.286 - 10233.698: 97.6369% ( 8) 00:07:27.736 10233.698 - 10284.111: 97.7032% ( 12) 00:07:27.736 10284.111 - 10334.523: 97.8909% ( 34) 00:07:27.737 10334.523 - 10384.935: 97.9516% ( 11) 00:07:27.737 10384.935 - 10435.348: 97.9958% ( 8) 00:07:27.737 10435.348 - 10485.760: 98.0345% ( 7) 00:07:27.737 10485.760 - 10536.172: 98.0786% ( 8) 00:07:27.737 10536.172 - 10586.585: 98.1173% ( 7) 00:07:27.737 10586.585 - 10636.997: 98.1449% ( 5) 00:07:27.737 10636.997 - 10687.409: 98.1725% ( 5) 00:07:27.737 10687.409 - 10737.822: 98.2001% ( 5) 00:07:27.737 10737.822 - 10788.234: 98.2277% ( 5) 00:07:27.737 10788.234 - 10838.646: 98.2332% ( 1) 00:07:27.737 10838.646 - 10889.058: 98.2443% ( 2) 00:07:27.737 10889.058 - 10939.471: 98.2663% ( 4) 00:07:27.737 10939.471 - 10989.883: 98.3050% ( 7) 00:07:27.737 10989.883 - 11040.295: 98.3602% ( 10) 00:07:27.737 11040.295 - 11090.708: 98.5038% ( 26) 00:07:27.737 11090.708 - 11141.120: 98.5700% ( 12) 00:07:27.737 11141.120 - 11191.532: 98.6528% ( 15) 00:07:27.737 11191.532 - 11241.945: 98.8019% ( 27) 00:07:27.737 11241.945 - 11292.357: 98.8350% ( 6) 00:07:27.737 11292.357 - 11342.769: 98.8571% ( 4) 00:07:27.737 11342.769 - 11393.182: 98.8682% ( 2) 00:07:27.737 11393.182 - 11443.594: 98.8792% ( 2) 00:07:27.737 11443.594 - 11494.006: 98.8958% ( 3) 00:07:27.737 11494.006 - 11544.418: 
98.9068% ( 2) 00:07:27.737 11544.418 - 11594.831: 98.9178% ( 2) 00:07:27.737 11594.831 - 11645.243: 98.9289% ( 2) 00:07:27.737 11645.243 - 11695.655: 98.9399% ( 2) 00:07:27.737 12351.015 - 12401.428: 98.9620% ( 4) 00:07:27.737 12401.428 - 12451.840: 98.9786% ( 3) 00:07:27.737 12451.840 - 12502.252: 99.0007% ( 4) 00:07:27.737 12502.252 - 12552.665: 99.0172% ( 3) 00:07:27.737 12552.665 - 12603.077: 99.0393% ( 4) 00:07:27.737 12603.077 - 12653.489: 99.0559% ( 3) 00:07:27.737 12653.489 - 12703.902: 99.0724% ( 3) 00:07:27.737 12703.902 - 12754.314: 99.0890% ( 3) 00:07:27.737 12754.314 - 12804.726: 99.1056% ( 3) 00:07:27.737 12804.726 - 12855.138: 99.1221% ( 3) 00:07:27.737 12855.138 - 12905.551: 99.1387% ( 3) 00:07:27.737 12905.551 - 13006.375: 99.1939% ( 10) 00:07:27.737 13006.375 - 13107.200: 99.2546% ( 11) 00:07:27.737 13107.200 - 13208.025: 99.3154% ( 11) 00:07:27.737 13208.025 - 13308.849: 99.3761% ( 11) 00:07:27.737 13308.849 - 13409.674: 99.4037% ( 5) 00:07:27.737 13409.674 - 13510.498: 99.4258% ( 4) 00:07:27.737 13510.498 - 13611.323: 99.4479% ( 4) 00:07:27.737 13611.323 - 13712.148: 99.4700% ( 4) 00:07:27.737 13712.148 - 13812.972: 99.4920% ( 4) 00:07:27.737 13812.972 - 13913.797: 99.5197% ( 5) 00:07:27.737 13913.797 - 14014.622: 99.5417% ( 4) 00:07:27.737 14014.622 - 14115.446: 99.5638% ( 4) 00:07:27.737 14115.446 - 14216.271: 99.5859% ( 4) 00:07:27.737 14216.271 - 14317.095: 99.6135% ( 5) 00:07:27.737 14317.095 - 14417.920: 99.6356% ( 4) 00:07:27.737 14417.920 - 14518.745: 99.6466% ( 2) 00:07:27.737 17745.132 - 17845.957: 99.6522% ( 1) 00:07:27.737 17845.957 - 17946.782: 99.6742% ( 4) 00:07:27.737 17946.782 - 18047.606: 99.7019% ( 5) 00:07:27.737 18047.606 - 18148.431: 99.7239% ( 4) 00:07:27.737 18148.431 - 18249.255: 99.7460% ( 4) 00:07:27.737 18249.255 - 18350.080: 99.7681% ( 4) 00:07:27.737 18350.080 - 18450.905: 99.7902% ( 4) 00:07:27.737 18450.905 - 18551.729: 99.8123% ( 4) 00:07:27.737 18551.729 - 18652.554: 99.8344% ( 4) 00:07:27.737 18652.554 - 18753.378: 99.8564% ( 4) 00:07:27.737 18753.378 - 18854.203: 99.8785% ( 4) 00:07:27.737 18854.203 - 18955.028: 99.9006% ( 4) 00:07:27.737 18955.028 - 19055.852: 99.9227% ( 4) 00:07:27.737 19055.852 - 19156.677: 99.9448% ( 4) 00:07:27.737 19156.677 - 19257.502: 99.9669% ( 4) 00:07:27.737 19257.502 - 19358.326: 99.9890% ( 4) 00:07:27.737 19358.326 - 19459.151: 100.0000% ( 2) 00:07:27.737 00:07:27.996 09:40:06 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:27.996 00:07:27.996 real 0m2.503s 00:07:27.996 user 0m2.206s 00:07:27.996 sys 0m0.199s 00:07:27.996 09:40:06 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.996 ************************************ 00:07:27.996 END TEST nvme_perf 00:07:27.996 ************************************ 00:07:27.996 09:40:06 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:27.996 09:40:06 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:27.996 09:40:06 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:27.996 09:40:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.996 09:40:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:27.996 ************************************ 00:07:27.996 START TEST nvme_hello_world 00:07:27.996 ************************************ 00:07:27.996 09:40:06 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:27.996 Initializing NVMe 
Controllers 00:07:27.996 Attached to 0000:00:13.0 00:07:27.996 Namespace ID: 1 size: 1GB 00:07:27.996 Attached to 0000:00:10.0 00:07:27.996 Namespace ID: 1 size: 6GB 00:07:27.996 Attached to 0000:00:11.0 00:07:27.996 Namespace ID: 1 size: 5GB 00:07:27.996 Attached to 0000:00:12.0 00:07:27.996 Namespace ID: 1 size: 4GB 00:07:27.996 Namespace ID: 2 size: 4GB 00:07:27.996 Namespace ID: 3 size: 4GB 00:07:27.996 Initialization complete. 00:07:27.996 INFO: using host memory buffer for IO 00:07:27.996 Hello world! 00:07:27.996 INFO: using host memory buffer for IO 00:07:27.996 Hello world! 00:07:27.996 INFO: using host memory buffer for IO 00:07:27.996 Hello world! 00:07:27.996 INFO: using host memory buffer for IO 00:07:27.996 Hello world! 00:07:27.996 INFO: using host memory buffer for IO 00:07:27.996 Hello world! 00:07:27.996 INFO: using host memory buffer for IO 00:07:27.996 Hello world! 00:07:27.996 00:07:27.996 real 0m0.197s 00:07:27.996 user 0m0.085s 00:07:27.996 sys 0m0.082s 00:07:27.996 09:40:06 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.996 09:40:06 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:27.996 ************************************ 00:07:27.996 END TEST nvme_hello_world 00:07:27.996 ************************************ 00:07:27.996 09:40:06 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:27.996 09:40:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.996 09:40:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.996 09:40:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:28.256 ************************************ 00:07:28.256 START TEST nvme_sgl 00:07:28.256 ************************************ 00:07:28.256 09:40:06 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:28.256 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:28.256 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:28.256 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:28.256 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:28.256 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:28.256 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:28.256 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:28.256 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:28.256 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:28.256 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:28.256 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:28.256 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:28.256 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:28.256 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:28.256 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:28.256 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:28.256 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:28.256 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:28.256 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:28.256 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:28.256 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:28.256 0000:00:11.0: build_io_request_8 Invalid IO length parameter 
00:07:28.256 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:28.256 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:28.256 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:28.256 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:28.256 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:28.256 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:28.256 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:28.256 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:28.256 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:28.256 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:28.256 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:28.256 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:28.256 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:28.256 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:28.514 NVMe Readv/Writev Request test 00:07:28.514 Attached to 0000:00:13.0 00:07:28.514 Attached to 0000:00:10.0 00:07:28.514 Attached to 0000:00:11.0 00:07:28.514 Attached to 0000:00:12.0 00:07:28.514 0000:00:10.0: build_io_request_2 test passed 00:07:28.514 0000:00:10.0: build_io_request_4 test passed 00:07:28.514 0000:00:10.0: build_io_request_5 test passed 00:07:28.514 0000:00:10.0: build_io_request_6 test passed 00:07:28.514 0000:00:10.0: build_io_request_7 test passed 00:07:28.514 0000:00:10.0: build_io_request_10 test passed 00:07:28.514 0000:00:11.0: build_io_request_2 test passed 00:07:28.514 0000:00:11.0: build_io_request_4 test passed 00:07:28.514 0000:00:11.0: build_io_request_5 test passed 00:07:28.514 0000:00:11.0: build_io_request_6 test passed 00:07:28.514 0000:00:11.0: build_io_request_7 test passed 00:07:28.514 0000:00:11.0: build_io_request_10 test passed 00:07:28.514 Cleaning up... 00:07:28.514 00:07:28.514 real 0m0.273s 00:07:28.514 user 0m0.152s 00:07:28.514 sys 0m0.080s 00:07:28.514 09:40:07 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.514 09:40:07 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:28.514 ************************************ 00:07:28.514 END TEST nvme_sgl 00:07:28.514 ************************************ 00:07:28.514 09:40:07 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:28.514 09:40:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.514 09:40:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.514 09:40:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:28.514 ************************************ 00:07:28.514 START TEST nvme_e2edp 00:07:28.514 ************************************ 00:07:28.514 09:40:07 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:28.514 NVMe Write/Read with End-to-End data protection test 00:07:28.514 Attached to 0000:00:13.0 00:07:28.514 Attached to 0000:00:10.0 00:07:28.514 Attached to 0000:00:11.0 00:07:28.514 Attached to 0000:00:12.0 00:07:28.514 Cleaning up... 
00:07:28.773 00:07:28.773 real 0m0.211s 00:07:28.773 user 0m0.074s 00:07:28.773 sys 0m0.095s 00:07:28.773 09:40:07 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.773 09:40:07 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:28.773 ************************************ 00:07:28.773 END TEST nvme_e2edp 00:07:28.773 ************************************ 00:07:28.773 09:40:07 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:28.773 09:40:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.773 09:40:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.773 09:40:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:28.773 ************************************ 00:07:28.773 START TEST nvme_reserve 00:07:28.773 ************************************ 00:07:28.773 09:40:07 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:28.773 ===================================================== 00:07:28.773 NVMe Controller at PCI bus 0, device 19, function 0 00:07:28.773 ===================================================== 00:07:28.773 Reservations: Not Supported 00:07:28.773 ===================================================== 00:07:28.773 NVMe Controller at PCI bus 0, device 16, function 0 00:07:28.773 ===================================================== 00:07:28.773 Reservations: Not Supported 00:07:28.773 ===================================================== 00:07:28.773 NVMe Controller at PCI bus 0, device 17, function 0 00:07:28.773 ===================================================== 00:07:28.773 Reservations: Not Supported 00:07:28.773 ===================================================== 00:07:28.773 NVMe Controller at PCI bus 0, device 18, function 0 00:07:28.773 ===================================================== 00:07:28.773 Reservations: Not Supported 00:07:28.773 Reservation test passed 00:07:28.773 00:07:28.773 real 0m0.209s 00:07:28.773 user 0m0.070s 00:07:28.773 sys 0m0.096s 00:07:28.773 09:40:07 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.773 09:40:07 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:28.773 ************************************ 00:07:28.773 END TEST nvme_reserve 00:07:28.773 ************************************ 00:07:29.031 09:40:07 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:29.031 09:40:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.031 09:40:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.031 09:40:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.031 ************************************ 00:07:29.031 START TEST nvme_err_injection 00:07:29.031 ************************************ 00:07:29.031 09:40:07 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:29.031 NVMe Error Injection test 00:07:29.031 Attached to 0000:00:13.0 00:07:29.031 Attached to 0000:00:10.0 00:07:29.031 Attached to 0000:00:11.0 00:07:29.031 Attached to 0000:00:12.0 00:07:29.031 0000:00:13.0: get features failed as expected 00:07:29.031 0000:00:10.0: get features failed as expected 00:07:29.031 0000:00:11.0: get features failed as expected 00:07:29.031 0000:00:12.0: get features failed as expected 00:07:29.031 
0000:00:13.0: get features successfully as expected 00:07:29.031 0000:00:10.0: get features successfully as expected 00:07:29.031 0000:00:11.0: get features successfully as expected 00:07:29.031 0000:00:12.0: get features successfully as expected 00:07:29.031 0000:00:13.0: read failed as expected 00:07:29.031 0000:00:12.0: read failed as expected 00:07:29.031 0000:00:10.0: read failed as expected 00:07:29.031 0000:00:11.0: read failed as expected 00:07:29.031 0000:00:13.0: read successfully as expected 00:07:29.031 0000:00:10.0: read successfully as expected 00:07:29.031 0000:00:11.0: read successfully as expected 00:07:29.031 0000:00:12.0: read successfully as expected 00:07:29.031 Cleaning up... 00:07:29.031 00:07:29.031 real 0m0.219s 00:07:29.031 user 0m0.078s 00:07:29.031 sys 0m0.099s 00:07:29.031 09:40:07 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.031 09:40:07 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:29.031 ************************************ 00:07:29.031 END TEST nvme_err_injection 00:07:29.031 ************************************ 00:07:29.290 09:40:07 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:29.290 09:40:07 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:29.290 09:40:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.290 09:40:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.290 ************************************ 00:07:29.290 START TEST nvme_overhead 00:07:29.290 ************************************ 00:07:29.290 09:40:07 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:30.665 Initializing NVMe Controllers 00:07:30.665 Attached to 0000:00:13.0 00:07:30.665 Attached to 0000:00:10.0 00:07:30.665 Attached to 0000:00:11.0 00:07:30.665 Attached to 0000:00:12.0 00:07:30.665 Initialization complete. Launching workers. 
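The overhead run below reports its results as "submit (in ns) avg, min, max = ..." and "complete (in ns) avg, min, max = ..." summary lines, followed by per-bucket latency histograms. A small sketch for pulling those summary numbers out of a captured run; the file name overhead.log is assumed, and the sketch assumes each log entry sits on its own line as it does in the raw console output:

  # Extract the submit/complete latency summaries (values are in nanoseconds).
  awk -F'= *' '/\(in ns\) avg, min, max/ {
      split($2, v, /, */)
      printf "%s: avg=%s min=%s max=%s\n", ($0 ~ /submit/ ? "submit" : "complete"), v[1], v[2], v[3]
  }' overhead.log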
00:07:30.665 submit (in ns) avg, min, max = 11326.6, 9929.2, 64556.2 00:07:30.665 complete (in ns) avg, min, max = 7516.8, 7193.1, 45773.1 00:07:30.665 00:07:30.665 Submit histogram 00:07:30.665 ================ 00:07:30.665 Range in us Cumulative Count 00:07:30.665 9.895 - 9.945: 0.0056% ( 1) 00:07:30.665 9.945 - 9.994: 0.0112% ( 1) 00:07:30.665 10.043 - 10.092: 0.0168% ( 1) 00:07:30.665 10.240 - 10.289: 0.0224% ( 1) 00:07:30.665 10.782 - 10.831: 0.0280% ( 1) 00:07:30.665 10.831 - 10.880: 0.0671% ( 7) 00:07:30.665 10.880 - 10.929: 0.3860% ( 57) 00:07:30.665 10.929 - 10.978: 3.0657% ( 479) 00:07:30.665 10.978 - 11.028: 13.0629% ( 1787) 00:07:30.665 11.028 - 11.077: 32.4755% ( 3470) 00:07:30.665 11.077 - 11.126: 53.9524% ( 3839) 00:07:30.665 11.126 - 11.175: 70.6294% ( 2981) 00:07:30.665 11.175 - 11.225: 80.1846% ( 1708) 00:07:30.665 11.225 - 11.274: 85.0070% ( 862) 00:07:30.665 11.274 - 11.323: 87.9441% ( 525) 00:07:30.665 11.323 - 11.372: 89.7007% ( 314) 00:07:30.665 11.372 - 11.422: 90.8476% ( 205) 00:07:30.666 11.422 - 11.471: 91.6643% ( 146) 00:07:30.666 11.471 - 11.520: 92.1902% ( 94) 00:07:30.666 11.520 - 11.569: 92.6937% ( 90) 00:07:30.666 11.569 - 11.618: 93.0406% ( 62) 00:07:30.666 11.618 - 11.668: 93.3706% ( 59) 00:07:30.666 11.668 - 11.717: 93.6392% ( 48) 00:07:30.666 11.717 - 11.766: 93.9580% ( 57) 00:07:30.666 11.766 - 11.815: 94.1762% ( 39) 00:07:30.666 11.815 - 11.865: 94.4112% ( 42) 00:07:30.666 11.865 - 11.914: 94.6238% ( 38) 00:07:30.666 11.914 - 11.963: 94.7748% ( 27) 00:07:30.666 11.963 - 12.012: 94.9147% ( 25) 00:07:30.666 12.012 - 12.062: 95.0601% ( 26) 00:07:30.666 12.062 - 12.111: 95.2839% ( 40) 00:07:30.666 12.111 - 12.160: 95.4797% ( 35) 00:07:30.666 12.160 - 12.209: 95.7483% ( 48) 00:07:30.666 12.209 - 12.258: 95.9552% ( 37) 00:07:30.666 12.258 - 12.308: 96.2014% ( 44) 00:07:30.666 12.308 - 12.357: 96.3636% ( 29) 00:07:30.666 12.357 - 12.406: 96.4699% ( 19) 00:07:30.666 12.406 - 12.455: 96.5483% ( 14) 00:07:30.666 12.455 - 12.505: 96.5930% ( 8) 00:07:30.666 12.505 - 12.554: 96.6322% ( 7) 00:07:30.666 12.554 - 12.603: 96.6601% ( 5) 00:07:30.666 12.603 - 12.702: 96.6993% ( 7) 00:07:30.666 12.702 - 12.800: 96.7217% ( 4) 00:07:30.666 12.800 - 12.898: 96.7552% ( 6) 00:07:30.666 12.898 - 12.997: 96.8168% ( 11) 00:07:30.666 12.997 - 13.095: 96.9287% ( 20) 00:07:30.666 13.095 - 13.194: 97.0573% ( 23) 00:07:30.666 13.194 - 13.292: 97.1916% ( 24) 00:07:30.666 13.292 - 13.391: 97.2979% ( 19) 00:07:30.666 13.391 - 13.489: 97.3874% ( 16) 00:07:30.666 13.489 - 13.588: 97.4545% ( 12) 00:07:30.666 13.588 - 13.686: 97.5217% ( 12) 00:07:30.666 13.686 - 13.785: 97.5497% ( 5) 00:07:30.666 13.785 - 13.883: 97.5832% ( 6) 00:07:30.666 13.883 - 13.982: 97.6280% ( 8) 00:07:30.666 13.982 - 14.080: 97.6615% ( 6) 00:07:30.666 14.080 - 14.178: 97.7063% ( 8) 00:07:30.666 14.178 - 14.277: 97.7287% ( 4) 00:07:30.666 14.277 - 14.375: 97.7399% ( 2) 00:07:30.666 14.375 - 14.474: 97.7622% ( 4) 00:07:30.666 14.474 - 14.572: 97.7734% ( 2) 00:07:30.666 14.572 - 14.671: 97.7846% ( 2) 00:07:30.666 14.671 - 14.769: 97.8294% ( 8) 00:07:30.666 14.769 - 14.868: 97.8965% ( 12) 00:07:30.666 14.868 - 14.966: 97.9524% ( 10) 00:07:30.666 14.966 - 15.065: 98.0252% ( 13) 00:07:30.666 15.065 - 15.163: 98.0420% ( 3) 00:07:30.666 15.163 - 15.262: 98.0867% ( 8) 00:07:30.666 15.262 - 15.360: 98.1259% ( 7) 00:07:30.666 15.360 - 15.458: 98.1650% ( 7) 00:07:30.666 15.458 - 15.557: 98.1874% ( 4) 00:07:30.666 15.557 - 15.655: 98.2210% ( 6) 00:07:30.666 15.655 - 15.754: 98.2713% ( 9) 00:07:30.666 15.754 - 15.852: 98.3217% ( 
9) 00:07:30.666 15.852 - 15.951: 98.4000% ( 14) 00:07:30.666 15.951 - 16.049: 98.4280% ( 5) 00:07:30.666 16.049 - 16.148: 98.4615% ( 6) 00:07:30.666 16.148 - 16.246: 98.5063% ( 8) 00:07:30.666 16.246 - 16.345: 98.5343% ( 5) 00:07:30.666 16.345 - 16.443: 98.5510% ( 3) 00:07:30.666 16.443 - 16.542: 98.6126% ( 11) 00:07:30.666 16.542 - 16.640: 98.7189% ( 19) 00:07:30.666 16.640 - 16.738: 98.8587% ( 25) 00:07:30.666 16.738 - 16.837: 98.9818% ( 22) 00:07:30.666 16.837 - 16.935: 99.0545% ( 13) 00:07:30.666 16.935 - 17.034: 99.1888% ( 24) 00:07:30.666 17.034 - 17.132: 99.2392% ( 9) 00:07:30.666 17.132 - 17.231: 99.2783% ( 7) 00:07:30.666 17.231 - 17.329: 99.3734% ( 17) 00:07:30.666 17.329 - 17.428: 99.4350% ( 11) 00:07:30.666 17.428 - 17.526: 99.4853% ( 9) 00:07:30.666 17.526 - 17.625: 99.5524% ( 12) 00:07:30.666 17.625 - 17.723: 99.5860% ( 6) 00:07:30.666 17.723 - 17.822: 99.6420% ( 10) 00:07:30.666 17.822 - 17.920: 99.6587% ( 3) 00:07:30.666 17.920 - 18.018: 99.6699% ( 2) 00:07:30.666 18.018 - 18.117: 99.6923% ( 4) 00:07:30.666 18.117 - 18.215: 99.7091% ( 3) 00:07:30.666 18.215 - 18.314: 99.7203% ( 2) 00:07:30.666 18.412 - 18.511: 99.7259% ( 1) 00:07:30.666 18.511 - 18.609: 99.7483% ( 4) 00:07:30.666 18.609 - 18.708: 99.7538% ( 1) 00:07:30.666 18.708 - 18.806: 99.7594% ( 1) 00:07:30.666 18.806 - 18.905: 99.7650% ( 1) 00:07:30.666 18.905 - 19.003: 99.7706% ( 1) 00:07:30.666 19.102 - 19.200: 99.7762% ( 1) 00:07:30.666 19.200 - 19.298: 99.7818% ( 1) 00:07:30.666 19.791 - 19.889: 99.7986% ( 3) 00:07:30.666 19.988 - 20.086: 99.8042% ( 1) 00:07:30.666 20.283 - 20.382: 99.8098% ( 1) 00:07:30.666 20.480 - 20.578: 99.8210% ( 2) 00:07:30.666 20.874 - 20.972: 99.8266% ( 1) 00:07:30.666 20.972 - 21.071: 99.8322% ( 1) 00:07:30.666 21.169 - 21.268: 99.8434% ( 2) 00:07:30.666 21.465 - 21.563: 99.8490% ( 1) 00:07:30.666 21.563 - 21.662: 99.8601% ( 2) 00:07:30.666 21.957 - 22.055: 99.8657% ( 1) 00:07:30.666 22.154 - 22.252: 99.8713% ( 1) 00:07:30.666 22.351 - 22.449: 99.8769% ( 1) 00:07:30.666 22.548 - 22.646: 99.8881% ( 2) 00:07:30.666 23.138 - 23.237: 99.8993% ( 2) 00:07:30.666 23.237 - 23.335: 99.9049% ( 1) 00:07:30.666 23.532 - 23.631: 99.9105% ( 1) 00:07:30.666 23.926 - 24.025: 99.9161% ( 1) 00:07:30.666 24.025 - 24.123: 99.9217% ( 1) 00:07:30.666 24.418 - 24.517: 99.9273% ( 1) 00:07:30.666 25.108 - 25.206: 99.9329% ( 1) 00:07:30.666 25.206 - 25.403: 99.9385% ( 1) 00:07:30.666 26.782 - 26.978: 99.9441% ( 1) 00:07:30.666 26.978 - 27.175: 99.9497% ( 1) 00:07:30.666 27.372 - 27.569: 99.9552% ( 1) 00:07:30.666 27.766 - 27.963: 99.9608% ( 1) 00:07:30.666 28.751 - 28.948: 99.9664% ( 1) 00:07:30.666 38.597 - 38.794: 99.9720% ( 1) 00:07:30.666 44.308 - 44.505: 99.9776% ( 1) 00:07:30.666 51.594 - 51.988: 99.9832% ( 1) 00:07:30.666 58.289 - 58.683: 99.9944% ( 2) 00:07:30.666 64.197 - 64.591: 100.0000% ( 1) 00:07:30.666 00:07:30.666 Complete histogram 00:07:30.666 ================== 00:07:30.666 Range in us Cumulative Count 00:07:30.666 7.188 - 7.237: 0.0615% ( 11) 00:07:30.666 7.237 - 7.286: 2.9259% ( 512) 00:07:30.666 7.286 - 7.335: 21.2308% ( 3272) 00:07:30.666 7.335 - 7.385: 57.4266% ( 6470) 00:07:30.666 7.385 - 7.434: 81.7846% ( 4354) 00:07:30.666 7.434 - 7.483: 91.0937% ( 1664) 00:07:30.666 7.483 - 7.532: 94.3720% ( 586) 00:07:30.666 7.532 - 7.582: 96.0280% ( 296) 00:07:30.666 7.582 - 7.631: 96.8000% ( 138) 00:07:30.666 7.631 - 7.680: 97.2084% ( 73) 00:07:30.666 7.680 - 7.729: 97.3650% ( 28) 00:07:30.666 7.729 - 7.778: 97.4042% ( 7) 00:07:30.666 7.778 - 7.828: 97.4378% ( 6) 00:07:30.666 7.828 - 7.877: 97.4601% 
( 4) 00:07:30.666 7.877 - 7.926: 97.4881% ( 5) 00:07:30.666 7.926 - 7.975: 97.5161% ( 5) 00:07:30.666 7.975 - 8.025: 97.5329% ( 3) 00:07:30.666 8.025 - 8.074: 97.5832% ( 9) 00:07:30.666 8.074 - 8.123: 97.6056% ( 4) 00:07:30.666 8.123 - 8.172: 97.6224% ( 3) 00:07:30.666 8.172 - 8.222: 97.6448% ( 4) 00:07:30.666 8.222 - 8.271: 97.6727% ( 5) 00:07:30.666 8.271 - 8.320: 97.7175% ( 8) 00:07:30.666 8.320 - 8.369: 97.7343% ( 3) 00:07:30.666 8.369 - 8.418: 97.7455% ( 2) 00:07:30.666 8.418 - 8.468: 97.7678% ( 4) 00:07:30.666 8.468 - 8.517: 97.7846% ( 3) 00:07:30.666 8.566 - 8.615: 97.7958% ( 2) 00:07:30.666 8.615 - 8.665: 97.8014% ( 1) 00:07:30.666 8.665 - 8.714: 97.8070% ( 1) 00:07:30.666 8.714 - 8.763: 97.8126% ( 1) 00:07:30.666 8.812 - 8.862: 97.8294% ( 3) 00:07:30.666 8.862 - 8.911: 97.8350% ( 1) 00:07:30.666 8.960 - 9.009: 97.8406% ( 1) 00:07:30.666 9.206 - 9.255: 97.8517% ( 2) 00:07:30.666 9.452 - 9.502: 97.8573% ( 1) 00:07:30.666 9.551 - 9.600: 97.8629% ( 1) 00:07:30.666 9.600 - 9.649: 97.8685% ( 1) 00:07:30.666 9.698 - 9.748: 97.8741% ( 1) 00:07:30.666 9.748 - 9.797: 97.8909% ( 3) 00:07:30.666 9.895 - 9.945: 97.8965% ( 1) 00:07:30.666 9.945 - 9.994: 97.9021% ( 1) 00:07:30.666 9.994 - 10.043: 97.9133% ( 2) 00:07:30.666 10.043 - 10.092: 97.9245% ( 2) 00:07:30.666 10.092 - 10.142: 97.9301% ( 1) 00:07:30.666 10.191 - 10.240: 97.9469% ( 3) 00:07:30.666 10.240 - 10.289: 97.9580% ( 2) 00:07:30.666 10.289 - 10.338: 97.9636% ( 1) 00:07:30.666 10.338 - 10.388: 97.9692% ( 1) 00:07:30.666 10.388 - 10.437: 97.9804% ( 2) 00:07:30.666 10.437 - 10.486: 98.0196% ( 7) 00:07:30.666 10.486 - 10.535: 98.0420% ( 4) 00:07:30.666 10.535 - 10.585: 98.0699% ( 5) 00:07:30.666 10.585 - 10.634: 98.0867% ( 3) 00:07:30.666 10.634 - 10.683: 98.1203% ( 6) 00:07:30.666 10.683 - 10.732: 98.1315% ( 2) 00:07:30.666 10.732 - 10.782: 98.1594% ( 5) 00:07:30.666 10.782 - 10.831: 98.1930% ( 6) 00:07:30.666 10.831 - 10.880: 98.2098% ( 3) 00:07:30.666 10.880 - 10.929: 98.2266% ( 3) 00:07:30.666 10.929 - 10.978: 98.2490% ( 4) 00:07:30.667 10.978 - 11.028: 98.2713% ( 4) 00:07:30.667 11.028 - 11.077: 98.2937% ( 4) 00:07:30.667 11.077 - 11.126: 98.3049% ( 2) 00:07:30.667 11.126 - 11.175: 98.3217% ( 3) 00:07:30.667 11.175 - 11.225: 98.3273% ( 1) 00:07:30.667 11.225 - 11.274: 98.3385% ( 2) 00:07:30.667 11.274 - 11.323: 98.3552% ( 3) 00:07:30.667 11.323 - 11.372: 98.3664% ( 2) 00:07:30.667 11.372 - 11.422: 98.3888% ( 4) 00:07:30.667 11.422 - 11.471: 98.4112% ( 4) 00:07:30.667 11.520 - 11.569: 98.4168% ( 1) 00:07:30.667 11.618 - 11.668: 98.4224% ( 1) 00:07:30.667 11.766 - 11.815: 98.4280% ( 1) 00:07:30.667 11.865 - 11.914: 98.4336% ( 1) 00:07:30.667 11.963 - 12.012: 98.4503% ( 3) 00:07:30.667 12.012 - 12.062: 98.4559% ( 1) 00:07:30.667 12.111 - 12.160: 98.4615% ( 1) 00:07:30.667 12.160 - 12.209: 98.4671% ( 1) 00:07:30.667 12.209 - 12.258: 98.4727% ( 1) 00:07:30.667 12.258 - 12.308: 98.4783% ( 1) 00:07:30.667 12.357 - 12.406: 98.4839% ( 1) 00:07:30.667 12.455 - 12.505: 98.4895% ( 1) 00:07:30.667 12.554 - 12.603: 98.5063% ( 3) 00:07:30.667 12.603 - 12.702: 98.5287% ( 4) 00:07:30.667 12.702 - 12.800: 98.5734% ( 8) 00:07:30.667 12.800 - 12.898: 98.6741% ( 18) 00:07:30.667 12.898 - 12.997: 98.7580% ( 15) 00:07:30.667 12.997 - 13.095: 98.8308% ( 13) 00:07:30.667 13.095 - 13.194: 98.9259% ( 17) 00:07:30.667 13.194 - 13.292: 98.9818% ( 10) 00:07:30.667 13.292 - 13.391: 99.0490% ( 12) 00:07:30.667 13.391 - 13.489: 99.1664% ( 21) 00:07:30.667 13.489 - 13.588: 99.2671% ( 18) 00:07:30.667 13.588 - 13.686: 99.3455% ( 14) 00:07:30.667 13.686 - 13.785: 
99.4182% ( 13) 00:07:30.667 13.785 - 13.883: 99.4797% ( 11) 00:07:30.667 13.883 - 13.982: 99.5524% ( 13) 00:07:30.667 13.982 - 14.080: 99.5860% ( 6) 00:07:30.667 14.080 - 14.178: 99.6028% ( 3) 00:07:30.667 14.178 - 14.277: 99.6140% ( 2) 00:07:30.667 14.277 - 14.375: 99.6420% ( 5) 00:07:30.667 14.375 - 14.474: 99.6643% ( 4) 00:07:30.667 14.474 - 14.572: 99.6811% ( 3) 00:07:30.667 14.572 - 14.671: 99.6867% ( 1) 00:07:30.667 14.671 - 14.769: 99.7035% ( 3) 00:07:30.667 14.769 - 14.868: 99.7147% ( 2) 00:07:30.667 14.966 - 15.065: 99.7203% ( 1) 00:07:30.667 15.065 - 15.163: 99.7259% ( 1) 00:07:30.667 15.458 - 15.557: 99.7427% ( 3) 00:07:30.667 15.557 - 15.655: 99.7483% ( 1) 00:07:30.667 15.852 - 15.951: 99.7594% ( 2) 00:07:30.667 15.951 - 16.049: 99.7650% ( 1) 00:07:30.667 16.049 - 16.148: 99.7762% ( 2) 00:07:30.667 16.246 - 16.345: 99.7874% ( 2) 00:07:30.667 16.345 - 16.443: 99.7930% ( 1) 00:07:30.667 16.443 - 16.542: 99.8042% ( 2) 00:07:30.667 16.542 - 16.640: 99.8098% ( 1) 00:07:30.667 16.738 - 16.837: 99.8210% ( 2) 00:07:30.667 16.837 - 16.935: 99.8322% ( 2) 00:07:30.667 17.231 - 17.329: 99.8378% ( 1) 00:07:30.667 17.428 - 17.526: 99.8434% ( 1) 00:07:30.667 17.625 - 17.723: 99.8490% ( 1) 00:07:30.667 17.723 - 17.822: 99.8657% ( 3) 00:07:30.667 17.920 - 18.018: 99.8713% ( 1) 00:07:30.667 18.314 - 18.412: 99.8769% ( 1) 00:07:30.667 18.412 - 18.511: 99.8825% ( 1) 00:07:30.667 18.708 - 18.806: 99.8881% ( 1) 00:07:30.667 18.905 - 19.003: 99.8993% ( 2) 00:07:30.667 19.102 - 19.200: 99.9161% ( 3) 00:07:30.667 19.495 - 19.594: 99.9217% ( 1) 00:07:30.667 19.988 - 20.086: 99.9273% ( 1) 00:07:30.667 20.677 - 20.775: 99.9329% ( 1) 00:07:30.667 20.775 - 20.874: 99.9385% ( 1) 00:07:30.667 22.154 - 22.252: 99.9441% ( 1) 00:07:30.667 22.252 - 22.351: 99.9497% ( 1) 00:07:30.667 22.646 - 22.745: 99.9552% ( 1) 00:07:30.667 22.942 - 23.040: 99.9608% ( 1) 00:07:30.667 23.631 - 23.729: 99.9664% ( 1) 00:07:30.667 25.600 - 25.797: 99.9720% ( 1) 00:07:30.667 25.994 - 26.191: 99.9776% ( 1) 00:07:30.667 29.145 - 29.342: 99.9832% ( 1) 00:07:30.667 32.886 - 33.083: 99.9888% ( 1) 00:07:30.667 40.763 - 40.960: 99.9944% ( 1) 00:07:30.667 45.686 - 45.883: 100.0000% ( 1) 00:07:30.667 00:07:30.667 00:07:30.667 real 0m1.210s 00:07:30.667 user 0m1.068s 00:07:30.667 sys 0m0.096s 00:07:30.667 09:40:09 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.667 09:40:09 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:30.667 ************************************ 00:07:30.667 END TEST nvme_overhead 00:07:30.667 ************************************ 00:07:30.667 09:40:09 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:30.667 09:40:09 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:30.667 09:40:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.667 09:40:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.667 ************************************ 00:07:30.667 START TEST nvme_arbitration 00:07:30.667 ************************************ 00:07:30.667 09:40:09 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:33.963 Initializing NVMe Controllers 00:07:33.963 Attached to 0000:00:13.0 00:07:33.963 Attached to 0000:00:10.0 00:07:33.963 Attached to 0000:00:11.0 00:07:33.963 Attached to 0000:00:12.0 00:07:33.963 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:07:33.963 Associating QEMU NVMe 
Ctrl (12340 ) with lcore 1 00:07:33.964 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:07:33.964 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:33.964 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:33.964 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:33.964 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:33.964 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:33.964 Initialization complete. Launching workers. 00:07:33.964 Starting thread on core 1 with urgent priority queue 00:07:33.964 Starting thread on core 2 with urgent priority queue 00:07:33.964 Starting thread on core 3 with urgent priority queue 00:07:33.964 Starting thread on core 0 with urgent priority queue 00:07:33.964 QEMU NVMe Ctrl (12343 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:07:33.964 QEMU NVMe Ctrl (12342 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:07:33.964 QEMU NVMe Ctrl (12340 ) core 1: 960.00 IO/s 104.17 secs/100000 ios 00:07:33.964 QEMU NVMe Ctrl (12342 ) core 1: 960.00 IO/s 104.17 secs/100000 ios 00:07:33.964 QEMU NVMe Ctrl (12341 ) core 2: 960.00 IO/s 104.17 secs/100000 ios 00:07:33.964 QEMU NVMe Ctrl (12342 ) core 3: 938.67 IO/s 106.53 secs/100000 ios 00:07:33.964 ======================================================== 00:07:33.964 00:07:33.964 00:07:33.964 real 0m3.304s 00:07:33.964 user 0m9.249s 00:07:33.964 sys 0m0.114s 00:07:33.964 09:40:12 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.964 09:40:12 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:33.964 ************************************ 00:07:33.964 END TEST nvme_arbitration 00:07:33.964 ************************************ 00:07:33.964 09:40:12 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:33.964 09:40:12 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:33.964 09:40:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.964 09:40:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.964 ************************************ 00:07:33.964 START TEST nvme_single_aen 00:07:33.964 ************************************ 00:07:33.964 09:40:12 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:33.964 Asynchronous Event Request test 00:07:33.964 Attached to 0000:00:13.0 00:07:33.964 Attached to 0000:00:10.0 00:07:33.964 Attached to 0000:00:11.0 00:07:33.964 Attached to 0000:00:12.0 00:07:33.964 Reset controller to setup AER completions for this process 00:07:33.964 Registering asynchronous event callbacks... 
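The aer run that follows exercises asynchronous event reporting by reading each controller's temperature threshold (343 Kelvin here), lowering it beneath the current composite temperature (323 Kelvin) so the controller fires an AER, and then restoring it. Outside the harness the same feature can be poked with nvme-cli; the device path and the value 300 are illustrative only, and feature ID 0x04 is the NVMe Temperature Threshold feature:

  nvme get-feature /dev/nvme0 -f 0x04          # read the current threshold (reported in Kelvin)
  nvme set-feature /dev/nvme0 -f 0x04 -v 300   # drop it below the current temperature to trigger an AER
  nvme set-feature /dev/nvme0 -f 0x04 -v 343   # restore the original threshold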
00:07:33.964 Getting orig temperature thresholds of all controllers 00:07:33.964 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:33.964 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:33.964 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:33.964 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:33.964 Setting all controllers temperature threshold low to trigger AER 00:07:33.964 Waiting for all controllers temperature threshold to be set lower 00:07:33.964 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:33.964 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:33.964 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:33.964 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:33.964 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:33.964 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:33.964 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:33.964 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:33.964 Waiting for all controllers to trigger AER and reset threshold 00:07:33.964 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:33.964 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:33.964 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:33.964 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:33.964 Cleaning up... 00:07:33.964 00:07:33.964 real 0m0.211s 00:07:33.964 user 0m0.071s 00:07:33.964 sys 0m0.101s 00:07:33.964 ************************************ 00:07:33.964 09:40:12 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.964 09:40:12 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:33.964 END TEST nvme_single_aen 00:07:33.964 ************************************ 00:07:33.964 09:40:12 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:33.964 09:40:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.964 09:40:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.964 09:40:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.964 ************************************ 00:07:33.964 START TEST nvme_doorbell_aers 00:07:33.964 ************************************ 00:07:33.964 09:40:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:33.964 09:40:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:33.964 09:40:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:33.964 09:40:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:33.964 09:40:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:33.964 09:40:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:33.964 09:40:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:33.964 09:40:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:33.964 09:40:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:33.964 09:40:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
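The xtrace above shows how nvme_doorbell_aers discovers its targets: gen_nvme.sh emits a JSON config and jq extracts each controller's traddr, giving the four PCI addresses printed just below. A minimal standalone sketch of that enumeration pattern, illustrative rather than the harness code itself:

  #!/usr/bin/env bash
  # Enumerate NVMe PCI addresses the same way the trace above does.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"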
00:07:34.225 09:40:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:34.225 09:40:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:34.225 09:40:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:34.225 09:40:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:34.225 [2024-11-28 09:40:13.061978] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. Dropping the request. 00:07:44.203 Executing: test_write_invalid_db 00:07:44.203 Waiting for AER completion... 00:07:44.203 Failure: test_write_invalid_db 00:07:44.203 00:07:44.203 Executing: test_invalid_db_write_overflow_sq 00:07:44.203 Waiting for AER completion... 00:07:44.203 Failure: test_invalid_db_write_overflow_sq 00:07:44.203 00:07:44.203 Executing: test_invalid_db_write_overflow_cq 00:07:44.203 Waiting for AER completion... 00:07:44.203 Failure: test_invalid_db_write_overflow_cq 00:07:44.203 00:07:44.203 09:40:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:44.203 09:40:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:44.461 [2024-11-28 09:40:23.086981] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. Dropping the request. 00:07:54.475 Executing: test_write_invalid_db 00:07:54.475 Waiting for AER completion... 00:07:54.475 Failure: test_write_invalid_db 00:07:54.475 00:07:54.475 Executing: test_invalid_db_write_overflow_sq 00:07:54.475 Waiting for AER completion... 00:07:54.475 Failure: test_invalid_db_write_overflow_sq 00:07:54.475 00:07:54.475 Executing: test_invalid_db_write_overflow_cq 00:07:54.475 Waiting for AER completion... 00:07:54.475 Failure: test_invalid_db_write_overflow_cq 00:07:54.475 00:07:54.475 09:40:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:54.475 09:40:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:54.475 [2024-11-28 09:40:33.116801] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. Dropping the request. 00:08:04.445 Executing: test_write_invalid_db 00:08:04.445 Waiting for AER completion... 00:08:04.445 Failure: test_write_invalid_db 00:08:04.445 00:08:04.445 Executing: test_invalid_db_write_overflow_sq 00:08:04.445 Waiting for AER completion... 00:08:04.445 Failure: test_invalid_db_write_overflow_sq 00:08:04.445 00:08:04.445 Executing: test_invalid_db_write_overflow_cq 00:08:04.445 Waiting for AER completion... 
00:08:04.445 Failure: test_invalid_db_write_overflow_cq 00:08:04.445 00:08:04.445 09:40:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:04.445 09:40:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:04.445 [2024-11-28 09:40:43.159906] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. Dropping the request. 00:08:14.441 Executing: test_write_invalid_db 00:08:14.441 Waiting for AER completion... 00:08:14.441 Failure: test_write_invalid_db 00:08:14.441 00:08:14.441 Executing: test_invalid_db_write_overflow_sq 00:08:14.441 Waiting for AER completion... 00:08:14.441 Failure: test_invalid_db_write_overflow_sq 00:08:14.441 00:08:14.441 Executing: test_invalid_db_write_overflow_cq 00:08:14.441 Waiting for AER completion... 00:08:14.441 Failure: test_invalid_db_write_overflow_cq 00:08:14.441 00:08:14.441 00:08:14.441 real 0m40.193s 00:08:14.441 user 0m34.209s 00:08:14.441 sys 0m5.622s 00:08:14.441 09:40:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.441 09:40:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:14.441 ************************************ 00:08:14.441 END TEST nvme_doorbell_aers 00:08:14.441 ************************************ 00:08:14.441 09:40:53 nvme -- nvme/nvme.sh@97 -- # uname 00:08:14.441 09:40:53 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:14.441 09:40:53 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:14.442 09:40:53 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:14.442 09:40:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.442 09:40:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.442 ************************************ 00:08:14.442 START TEST nvme_multi_aen 00:08:14.442 ************************************ 00:08:14.442 09:40:53 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:14.442 [2024-11-28 09:40:53.207097] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. Dropping the request. 00:08:14.442 [2024-11-28 09:40:53.207168] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. Dropping the request. 00:08:14.442 [2024-11-28 09:40:53.207179] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. Dropping the request. 00:08:14.442 [2024-11-28 09:40:53.208501] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. Dropping the request. 00:08:14.442 [2024-11-28 09:40:53.208539] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. Dropping the request. 00:08:14.442 [2024-11-28 09:40:53.208548] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. Dropping the request. 00:08:14.442 [2024-11-28 09:40:53.209441] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. 
Dropping the request. 00:08:14.442 [2024-11-28 09:40:53.209469] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. Dropping the request. 00:08:14.442 [2024-11-28 09:40:53.209477] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. Dropping the request. 00:08:14.442 [2024-11-28 09:40:53.210363] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. Dropping the request. 00:08:14.442 [2024-11-28 09:40:53.210389] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. Dropping the request. 00:08:14.442 [2024-11-28 09:40:53.210397] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63162) is not found. Dropping the request. 00:08:14.442 Child process pid: 63689 00:08:14.703 [Child] Asynchronous Event Request test 00:08:14.703 [Child] Attached to 0000:00:13.0 00:08:14.703 [Child] Attached to 0000:00:10.0 00:08:14.703 [Child] Attached to 0000:00:11.0 00:08:14.703 [Child] Attached to 0000:00:12.0 00:08:14.703 [Child] Registering asynchronous event callbacks... 00:08:14.703 [Child] Getting orig temperature thresholds of all controllers 00:08:14.703 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.703 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.704 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.704 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.704 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:14.704 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.704 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.704 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.704 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.704 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.704 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.704 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.704 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.704 [Child] Cleaning up... 00:08:14.704 Asynchronous Event Request test 00:08:14.704 Attached to 0000:00:13.0 00:08:14.704 Attached to 0000:00:10.0 00:08:14.704 Attached to 0000:00:11.0 00:08:14.704 Attached to 0000:00:12.0 00:08:14.704 Reset controller to setup AER completions for this process 00:08:14.704 Registering asynchronous event callbacks... 
00:08:14.704 Getting orig temperature thresholds of all controllers 00:08:14.704 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.704 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.704 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.704 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.704 Setting all controllers temperature threshold low to trigger AER 00:08:14.704 Waiting for all controllers temperature threshold to be set lower 00:08:14.704 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.704 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:14.704 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.704 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:14.704 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.704 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:14.704 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.704 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:14.704 Waiting for all controllers to trigger AER and reset threshold 00:08:14.704 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.704 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.704 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.704 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.704 Cleaning up... 00:08:14.704 00:08:14.704 real 0m0.430s 00:08:14.704 user 0m0.144s 00:08:14.704 sys 0m0.181s 00:08:14.704 09:40:53 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.704 09:40:53 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:14.704 ************************************ 00:08:14.704 END TEST nvme_multi_aen 00:08:14.704 ************************************ 00:08:14.704 09:40:53 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:14.704 09:40:53 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:14.704 09:40:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.704 09:40:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.704 ************************************ 00:08:14.704 START TEST nvme_startup 00:08:14.704 ************************************ 00:08:14.704 09:40:53 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:14.963 Initializing NVMe Controllers 00:08:14.963 Attached to 0000:00:13.0 00:08:14.963 Attached to 0000:00:10.0 00:08:14.963 Attached to 0000:00:11.0 00:08:14.963 Attached to 0000:00:12.0 00:08:14.963 Initialization complete. 00:08:14.963 Time used:137212.375 (us). 
00:08:14.963 00:08:14.963 real 0m0.194s 00:08:14.963 user 0m0.060s 00:08:14.963 sys 0m0.092s 00:08:14.963 09:40:53 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.963 09:40:53 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:14.963 ************************************ 00:08:14.963 END TEST nvme_startup 00:08:14.963 ************************************ 00:08:14.963 09:40:53 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:14.963 09:40:53 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.963 09:40:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.963 09:40:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.963 ************************************ 00:08:14.963 START TEST nvme_multi_secondary 00:08:14.963 ************************************ 00:08:14.963 09:40:53 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:14.963 09:40:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63739 00:08:14.963 09:40:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63740 00:08:14.963 09:40:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:14.963 09:40:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:14.963 09:40:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:18.245 Initializing NVMe Controllers 00:08:18.245 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:18.245 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:18.245 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:18.245 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:18.245 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:18.245 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:18.245 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:18.245 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:18.245 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:18.245 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:18.245 Initialization complete. Launching workers. 
00:08:18.245 ======================================================== 00:08:18.245 Latency(us) 00:08:18.245 Device Information : IOPS MiB/s Average min max 00:08:18.245 PCIE (0000:00:13.0) NSID 1 from core 2: 3273.04 12.79 4887.65 1007.33 20212.77 00:08:18.245 PCIE (0000:00:10.0) NSID 1 from core 2: 3273.04 12.79 4886.67 908.69 21734.75 00:08:18.245 PCIE (0000:00:11.0) NSID 1 from core 2: 3273.04 12.79 4887.97 895.39 21811.11 00:08:18.245 PCIE (0000:00:12.0) NSID 1 from core 2: 3273.04 12.79 4888.14 930.15 16119.91 00:08:18.245 PCIE (0000:00:12.0) NSID 2 from core 2: 3273.04 12.79 4888.84 886.84 17403.52 00:08:18.245 PCIE (0000:00:12.0) NSID 3 from core 2: 3273.04 12.79 4888.40 934.06 19722.98 00:08:18.245 ======================================================== 00:08:18.245 Total : 19638.23 76.71 4887.95 886.84 21811.11 00:08:18.245 00:08:18.245 09:40:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63739 00:08:18.245 Initializing NVMe Controllers 00:08:18.245 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:18.245 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:18.245 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:18.245 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:18.245 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:18.246 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:18.246 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:18.246 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:18.246 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:18.246 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:18.246 Initialization complete. Launching workers. 00:08:18.246 ======================================================== 00:08:18.246 Latency(us) 00:08:18.246 Device Information : IOPS MiB/s Average min max 00:08:18.246 PCIE (0000:00:13.0) NSID 1 from core 1: 7685.14 30.02 2081.51 910.46 17246.23 00:08:18.246 PCIE (0000:00:10.0) NSID 1 from core 1: 7685.14 30.02 2080.62 983.12 18082.32 00:08:18.246 PCIE (0000:00:11.0) NSID 1 from core 1: 7685.14 30.02 2081.52 869.83 16320.41 00:08:18.246 PCIE (0000:00:12.0) NSID 1 from core 1: 7685.14 30.02 2081.47 881.45 14975.27 00:08:18.246 PCIE (0000:00:12.0) NSID 2 from core 1: 7685.14 30.02 2081.42 911.97 13722.48 00:08:18.246 PCIE (0000:00:12.0) NSID 3 from core 1: 7685.14 30.02 2081.40 945.38 15598.84 00:08:18.246 ======================================================== 00:08:18.246 Total : 46110.85 180.12 2081.32 869.83 18082.32 00:08:18.246 00:08:20.146 Initializing NVMe Controllers 00:08:20.146 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:20.146 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:20.146 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:20.146 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:20.146 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:20.146 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:20.146 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:20.146 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:20.146 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:20.146 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:20.146 Initialization complete. Launching workers. 
00:08:20.146 ======================================================== 00:08:20.146 Latency(us) 00:08:20.146 Device Information : IOPS MiB/s Average min max 00:08:20.146 PCIE (0000:00:13.0) NSID 1 from core 0: 10961.63 42.82 1459.27 696.85 9618.58 00:08:20.146 PCIE (0000:00:10.0) NSID 1 from core 0: 10961.63 42.82 1458.41 673.38 9469.58 00:08:20.146 PCIE (0000:00:11.0) NSID 1 from core 0: 10961.63 42.82 1459.24 686.44 8491.35 00:08:20.146 PCIE (0000:00:12.0) NSID 1 from core 0: 10961.63 42.82 1459.22 632.04 8281.69 00:08:20.146 PCIE (0000:00:12.0) NSID 2 from core 0: 10961.63 42.82 1459.22 605.71 9759.40 00:08:20.146 PCIE (0000:00:12.0) NSID 3 from core 0: 10961.63 42.82 1459.21 583.57 8758.19 00:08:20.146 ======================================================== 00:08:20.146 Total : 65769.81 256.91 1459.10 583.57 9759.40 00:08:20.146 00:08:20.146 09:40:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63740 00:08:20.146 09:40:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63809 00:08:20.146 09:40:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:20.146 09:40:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63810 00:08:20.146 09:40:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:20.146 09:40:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:23.441 Initializing NVMe Controllers 00:08:23.441 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:23.441 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:23.441 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:23.441 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:23.441 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:23.441 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:23.441 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:23.441 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:23.441 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:23.441 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:23.441 Initialization complete. Launching workers. 
00:08:23.441 ======================================================== 00:08:23.441 Latency(us) 00:08:23.441 Device Information : IOPS MiB/s Average min max 00:08:23.441 PCIE (0000:00:13.0) NSID 1 from core 1: 5356.62 20.92 2986.58 736.80 13571.20 00:08:23.441 PCIE (0000:00:10.0) NSID 1 from core 1: 5356.62 20.92 2986.40 721.56 13720.03 00:08:23.441 PCIE (0000:00:11.0) NSID 1 from core 1: 5356.62 20.92 2987.50 742.13 13975.70 00:08:23.441 PCIE (0000:00:12.0) NSID 1 from core 1: 5356.62 20.92 2987.56 737.90 15049.64 00:08:23.441 PCIE (0000:00:12.0) NSID 2 from core 1: 5356.62 20.92 2988.00 741.57 12969.24 00:08:23.441 PCIE (0000:00:12.0) NSID 3 from core 1: 5356.62 20.92 2988.37 735.05 13115.62 00:08:23.441 ======================================================== 00:08:23.441 Total : 32139.70 125.55 2987.40 721.56 15049.64 00:08:23.441 00:08:23.441 Initializing NVMe Controllers 00:08:23.441 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:23.441 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:23.441 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:23.441 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:23.441 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:23.441 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:23.441 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:23.441 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:23.441 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:23.441 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:23.441 Initialization complete. Launching workers. 00:08:23.441 ======================================================== 00:08:23.441 Latency(us) 00:08:23.441 Device Information : IOPS MiB/s Average min max 00:08:23.441 PCIE (0000:00:13.0) NSID 1 from core 0: 5143.29 20.09 3110.38 837.24 12862.02 00:08:23.441 PCIE (0000:00:10.0) NSID 1 from core 0: 5143.29 20.09 3110.17 822.18 12026.28 00:08:23.441 PCIE (0000:00:11.0) NSID 1 from core 0: 5143.29 20.09 3111.22 847.08 12252.09 00:08:23.441 PCIE (0000:00:12.0) NSID 1 from core 0: 5143.29 20.09 3111.17 828.96 12188.92 00:08:23.441 PCIE (0000:00:12.0) NSID 2 from core 0: 5143.29 20.09 3111.12 829.66 12332.52 00:08:23.441 PCIE (0000:00:12.0) NSID 3 from core 0: 5143.29 20.09 3111.08 838.43 13144.46 00:08:23.441 ======================================================== 00:08:23.441 Total : 30859.73 120.55 3110.85 822.18 13144.46 00:08:23.441 00:08:25.993 Initializing NVMe Controllers 00:08:25.993 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:25.993 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:25.993 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:25.993 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:25.993 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:25.993 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:25.993 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:25.993 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:25.993 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:25.993 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:25.993 Initialization complete. Launching workers. 
00:08:25.993 ======================================================== 00:08:25.993 Latency(us) 00:08:25.993 Device Information : IOPS MiB/s Average min max 00:08:25.993 PCIE (0000:00:13.0) NSID 1 from core 2: 2095.20 8.18 7635.69 1041.70 41867.89 00:08:25.993 PCIE (0000:00:10.0) NSID 1 from core 2: 2095.20 8.18 7634.97 1033.29 36563.50 00:08:25.993 PCIE (0000:00:11.0) NSID 1 from core 2: 2095.20 8.18 7636.59 1032.52 42677.21 00:08:25.993 PCIE (0000:00:12.0) NSID 1 from core 2: 2095.20 8.18 7636.06 1012.72 30969.56 00:08:25.993 PCIE (0000:00:12.0) NSID 2 from core 2: 2095.20 8.18 7635.54 1016.64 32029.04 00:08:25.993 PCIE (0000:00:12.0) NSID 3 from core 2: 2095.20 8.18 7636.17 1013.15 34204.26 00:08:25.993 ======================================================== 00:08:25.993 Total : 12571.19 49.11 7635.84 1012.72 42677.21 00:08:25.993 00:08:25.993 09:41:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63809 00:08:25.993 09:41:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63810 00:08:25.993 00:08:25.993 real 0m10.655s 00:08:25.993 user 0m18.348s 00:08:25.993 sys 0m0.635s 00:08:25.993 09:41:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.993 09:41:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:25.993 ************************************ 00:08:25.993 END TEST nvme_multi_secondary 00:08:25.993 ************************************ 00:08:25.993 09:41:04 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:25.993 09:41:04 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:25.993 09:41:04 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62770 ]] 00:08:25.993 09:41:04 nvme -- common/autotest_common.sh@1094 -- # kill 62770 00:08:25.993 09:41:04 nvme -- common/autotest_common.sh@1095 -- # wait 62770 00:08:25.993 [2024-11-28 09:41:04.424706] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 00:08:25.993 [2024-11-28 09:41:04.424774] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 00:08:25.993 [2024-11-28 09:41:04.424800] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 00:08:25.993 [2024-11-28 09:41:04.424818] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 00:08:25.993 [2024-11-28 09:41:04.427367] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 00:08:25.993 [2024-11-28 09:41:04.427431] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 00:08:25.993 [2024-11-28 09:41:04.427450] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 00:08:25.993 [2024-11-28 09:41:04.427467] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 00:08:25.993 [2024-11-28 09:41:04.430023] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 
00:08:25.993 [2024-11-28 09:41:04.430090] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 00:08:25.993 [2024-11-28 09:41:04.430109] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 00:08:25.993 [2024-11-28 09:41:04.430125] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 00:08:25.993 [2024-11-28 09:41:04.432751] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 00:08:25.993 [2024-11-28 09:41:04.432819] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 00:08:25.994 [2024-11-28 09:41:04.432838] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 00:08:25.994 [2024-11-28 09:41:04.432854] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63688) is not found. Dropping the request. 00:08:25.994 09:41:04 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:25.994 09:41:04 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:25.994 09:41:04 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:25.994 09:41:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.994 09:41:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.994 09:41:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.994 ************************************ 00:08:25.994 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:25.994 ************************************ 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:25.994 * Looking for test storage... 
00:08:25.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:25.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.994 --rc genhtml_branch_coverage=1 00:08:25.994 --rc genhtml_function_coverage=1 00:08:25.994 --rc genhtml_legend=1 00:08:25.994 --rc geninfo_all_blocks=1 00:08:25.994 --rc geninfo_unexecuted_blocks=1 00:08:25.994 00:08:25.994 ' 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:25.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.994 --rc genhtml_branch_coverage=1 00:08:25.994 --rc genhtml_function_coverage=1 00:08:25.994 --rc genhtml_legend=1 00:08:25.994 --rc geninfo_all_blocks=1 00:08:25.994 --rc geninfo_unexecuted_blocks=1 00:08:25.994 00:08:25.994 ' 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:25.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.994 --rc genhtml_branch_coverage=1 00:08:25.994 --rc genhtml_function_coverage=1 00:08:25.994 --rc genhtml_legend=1 00:08:25.994 --rc geninfo_all_blocks=1 00:08:25.994 --rc geninfo_unexecuted_blocks=1 00:08:25.994 00:08:25.994 ' 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:25.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.994 --rc genhtml_branch_coverage=1 00:08:25.994 --rc genhtml_function_coverage=1 00:08:25.994 --rc genhtml_legend=1 00:08:25.994 --rc geninfo_all_blocks=1 00:08:25.994 --rc geninfo_unexecuted_blocks=1 00:08:25.994 00:08:25.994 ' 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:25.994 
09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63971 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63971 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 63971 ']' 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
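Once spdk_tgt is listening, the stuck-admin-command flow traced below reduces to the following RPC sequence (controller address and flags are copied from the trace; the base64 GET FEATURES payload is abbreviated to a placeholder):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # Arm a one-shot injection: the next admin GET FEATURES (opc 10, i.e. 0x0a) is held for up
    # to 15 s and completed with SCT=0 / SC=1 instead of being submitted to the controller.
    $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Issue the admin command that will get stuck, give it time to stall, then reset the
    # controller so the pending request is completed manually with the injected status.
    $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c '<base64-encoded GET FEATURES command>' &
    sleep 2
    $RPC bdev_nvme_reset_controller nvme0
    wait
    $RPC bdev_nvme_detach_controller nvme0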
00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.994 09:41:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:25.994 [2024-11-28 09:41:04.859969] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:25.994 [2024-11-28 09:41:04.860089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63971 ] 00:08:26.256 [2024-11-28 09:41:05.031700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.256 [2024-11-28 09:41:05.131957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.256 [2024-11-28 09:41:05.132192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.256 [2024-11-28 09:41:05.132396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.256 [2024-11-28 09:41:05.132485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:27.197 nvme0n1 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_3vhqe.txt 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:27.197 true 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732786865 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=63994 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:27.197 09:41:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:29.111 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:29.111 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.111 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:29.111 [2024-11-28 09:41:07.831282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:29.111 [2024-11-28 09:41:07.831579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:29.111 [2024-11-28 09:41:07.831603] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:29.111 [2024-11-28 09:41:07.831617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:29.111 [2024-11-28 09:41:07.834781] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:29.111 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.111 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 63994 00:08:29.111 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 63994 00:08:29.111 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 63994 00:08:29.111 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_3vhqe.txt 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_3vhqe.txt 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63971 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 63971 ']' 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 63971 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63971 00:08:29.112 killing process with pid 63971 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63971' 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 63971 00:08:29.112 09:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 63971 00:08:31.024 09:41:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:31.024 09:41:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:31.024 00:08:31.024 real 0m4.881s 00:08:31.024 user 0m17.297s 00:08:31.024 sys 0m0.522s 00:08:31.024 ************************************ 00:08:31.024 END TEST bdev_nvme_reset_stuck_adm_cmd 
00:08:31.024 ************************************ 00:08:31.024 09:41:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.024 09:41:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:31.024 09:41:09 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:31.024 09:41:09 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:31.024 09:41:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.024 09:41:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.024 09:41:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.024 ************************************ 00:08:31.024 START TEST nvme_fio 00:08:31.024 ************************************ 00:08:31.024 09:41:09 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:31.024 09:41:09 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:31.024 09:41:09 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:31.024 09:41:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:31.024 09:41:09 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:31.024 09:41:09 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:31.024 09:41:09 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:31.024 09:41:09 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:31.024 09:41:09 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:31.024 09:41:09 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:31.024 09:41:09 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:31.024 09:41:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:31.024 09:41:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:31.024 09:41:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:31.024 09:41:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:31.024 09:41:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:31.024 09:41:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:31.024 09:41:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:31.284 09:41:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:31.284 09:41:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:31.284 09:41:10 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:31.284 09:41:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:31.284 09:41:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:31.284 09:41:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:31.284 09:41:10 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:31.284 09:41:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:31.284 09:41:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:31.284 09:41:10 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:31.284 09:41:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:31.284 09:41:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:31.284 09:41:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:31.284 09:41:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:31.284 09:41:10 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:31.284 09:41:10 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:31.284 09:41:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:31.284 09:41:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:31.562 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:31.562 fio-3.35 00:08:31.562 Starting 1 thread 00:08:38.168 00:08:38.168 test: (groupid=0, jobs=1): err= 0: pid=64134: Thu Nov 28 09:41:16 2024 00:08:38.168 read: IOPS=20.4k, BW=79.8MiB/s (83.7MB/s)(160MiB/2001msec) 00:08:38.168 slat (nsec): min=3378, max=82254, avg=5270.66, stdev=2458.56 00:08:38.169 clat (usec): min=211, max=10058, avg=3110.40, stdev=1124.54 00:08:38.169 lat (usec): min=216, max=10068, avg=3115.67, stdev=1125.65 00:08:38.169 clat percentiles (usec): 00:08:38.169 | 1.00th=[ 1876], 5.00th=[ 2212], 10.00th=[ 2278], 20.00th=[ 2376], 00:08:38.169 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2835], 00:08:38.169 | 70.00th=[ 3097], 80.00th=[ 3687], 90.00th=[ 4883], 95.00th=[ 5669], 00:08:38.169 | 99.00th=[ 6980], 99.50th=[ 7439], 99.90th=[ 8717], 99.95th=[ 8979], 00:08:38.169 | 99.99th=[ 9634] 00:08:38.169 bw ( KiB/s): min=79760, max=87888, per=100.00%, avg=83200.00, stdev=4205.26, samples=3 00:08:38.169 iops : min=19940, max=21972, avg=20800.00, stdev=1051.32, samples=3 00:08:38.169 write: IOPS=20.4k, BW=79.6MiB/s (83.5MB/s)(159MiB/2001msec); 0 zone resets 00:08:38.169 slat (usec): min=3, max=1889, avg= 5.47, stdev= 9.65 00:08:38.169 clat (usec): min=194, max=10133, avg=3136.12, stdev=1126.12 00:08:38.169 lat (usec): min=199, max=10161, avg=3141.59, stdev=1127.28 00:08:38.169 clat percentiles (usec): 00:08:38.169 | 1.00th=[ 1876], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2376], 00:08:38.169 | 30.00th=[ 2474], 40.00th=[ 2573], 50.00th=[ 2704], 60.00th=[ 2868], 00:08:38.169 | 70.00th=[ 3130], 80.00th=[ 3720], 90.00th=[ 4883], 95.00th=[ 5669], 00:08:38.169 | 99.00th=[ 7046], 99.50th=[ 7439], 99.90th=[ 8717], 99.95th=[ 9241], 00:08:38.169 | 99.99th=[ 9896] 00:08:38.169 bw ( KiB/s): min=79936, max=88056, per=100.00%, avg=83218.67, stdev=4277.42, samples=3 00:08:38.169 iops : min=19984, max=22014, avg=20804.67, stdev=1069.36, samples=3 00:08:38.169 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.03% 00:08:38.169 lat (msec) : 2=1.45%, 4=81.36%, 10=17.11%, 20=0.01% 00:08:38.169 cpu : usr=98.90%, sys=0.10%, 
ctx=3, majf=0, minf=607 00:08:38.169 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:38.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:38.169 issued rwts: total=40898,40797,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.169 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:38.169 00:08:38.169 Run status group 0 (all jobs): 00:08:38.169 READ: bw=79.8MiB/s (83.7MB/s), 79.8MiB/s-79.8MiB/s (83.7MB/s-83.7MB/s), io=160MiB (168MB), run=2001-2001msec 00:08:38.169 WRITE: bw=79.6MiB/s (83.5MB/s), 79.6MiB/s-79.6MiB/s (83.5MB/s-83.5MB/s), io=159MiB (167MB), run=2001-2001msec 00:08:38.169 ----------------------------------------------------- 00:08:38.169 Suppressions used: 00:08:38.169 count bytes template 00:08:38.169 1 32 /usr/src/fio/parse.c 00:08:38.169 1 8 libtcmalloc_minimal.so 00:08:38.169 ----------------------------------------------------- 00:08:38.169 00:08:38.169 09:41:16 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:38.169 09:41:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:38.169 09:41:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:38.169 09:41:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:38.169 09:41:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:38.169 09:41:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:38.169 09:41:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:38.169 09:41:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:38.169 09:41:16 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:38.169 09:41:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:38.169 09:41:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:38.169 09:41:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:38.169 09:41:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:38.169 09:41:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:38.169 09:41:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:38.169 09:41:16 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:38.169 09:41:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:38.169 09:41:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:38.169 09:41:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:38.169 09:41:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:38.169 09:41:16 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:38.169 09:41:16 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:38.169 09:41:16 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:38.169 09:41:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:38.430 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:38.430 fio-3.35 00:08:38.430 Starting 1 thread 00:08:45.013 00:08:45.013 test: (groupid=0, jobs=1): err= 0: pid=64199: Thu Nov 28 09:41:23 2024 00:08:45.013 read: IOPS=18.8k, BW=73.5MiB/s (77.1MB/s)(147MiB/2001msec) 00:08:45.013 slat (nsec): min=4274, max=76171, avg=5521.55, stdev=2832.55 00:08:45.013 clat (usec): min=224, max=9952, avg=3373.14, stdev=1223.67 00:08:45.013 lat (usec): min=229, max=10006, avg=3378.66, stdev=1224.96 00:08:45.013 clat percentiles (usec): 00:08:45.013 | 1.00th=[ 2024], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2540], 00:08:45.013 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2900], 60.00th=[ 3097], 00:08:45.013 | 70.00th=[ 3392], 80.00th=[ 4080], 90.00th=[ 5276], 95.00th=[ 6194], 00:08:45.013 | 99.00th=[ 7439], 99.50th=[ 7767], 99.90th=[ 8586], 99.95th=[ 8979], 00:08:45.013 | 99.99th=[ 9896] 00:08:45.013 bw ( KiB/s): min=70360, max=80424, per=100.00%, avg=77058.67, stdev=5801.24, samples=3 00:08:45.013 iops : min=17590, max=20106, avg=19264.67, stdev=1450.31, samples=3 00:08:45.013 write: IOPS=18.8k, BW=73.6MiB/s (77.2MB/s)(147MiB/2001msec); 0 zone resets 00:08:45.013 slat (usec): min=4, max=208, avg= 5.64, stdev= 3.02 00:08:45.013 clat (usec): min=315, max=9887, avg=3398.98, stdev=1218.74 00:08:45.013 lat (usec): min=320, max=9898, avg=3404.62, stdev=1220.00 00:08:45.013 clat percentiles (usec): 00:08:45.013 | 1.00th=[ 2073], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2573], 00:08:45.013 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 3130], 00:08:45.013 | 70.00th=[ 3425], 80.00th=[ 4080], 90.00th=[ 5342], 95.00th=[ 6194], 00:08:45.013 | 99.00th=[ 7439], 99.50th=[ 7767], 99.90th=[ 8717], 99.95th=[ 9241], 00:08:45.013 | 99.99th=[ 9765] 00:08:45.013 bw ( KiB/s): min=70336, max=80776, per=100.00%, avg=77165.33, stdev=5917.62, samples=3 00:08:45.013 iops : min=17584, max=20194, avg=19291.33, stdev=1479.41, samples=3 00:08:45.013 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:08:45.013 lat (msec) : 2=0.75%, 4=78.25%, 10=20.97% 00:08:45.013 cpu : usr=98.95%, sys=0.05%, ctx=4, majf=0, minf=606 00:08:45.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:45.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:45.013 issued rwts: total=37673,37691,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:45.013 00:08:45.013 Run status group 0 (all jobs): 00:08:45.013 READ: bw=73.5MiB/s (77.1MB/s), 73.5MiB/s-73.5MiB/s (77.1MB/s-77.1MB/s), io=147MiB (154MB), run=2001-2001msec 00:08:45.013 WRITE: bw=73.6MiB/s (77.2MB/s), 73.6MiB/s-73.6MiB/s (77.2MB/s-77.2MB/s), io=147MiB (154MB), run=2001-2001msec 00:08:45.013 ----------------------------------------------------- 00:08:45.013 Suppressions used: 00:08:45.013 count bytes template 00:08:45.013 1 32 /usr/src/fio/parse.c 00:08:45.013 1 8 libtcmalloc_minimal.so 00:08:45.013 ----------------------------------------------------- 00:08:45.013 00:08:45.013 
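[editor's note] The trace above repeats the same pattern for every PCIe controller under test: nvme.sh uses spdk_nvme_identify to confirm the controller exposes an active namespace and to check whether it uses extended (metadata-interleaved) LBAs, settles on a 4096-byte block size, and then the autotest_common.sh fio_plugin helper locates the ASan runtime the plugin links against and preloads it together with the SPDK fio plugin before launching fio. Below is a minimal bash sketch of that pattern under the paths visible in this log; the helper name run_fio_for_bdf and the hard-coded bdf list are illustrative only, not taken from the real scripts.

#!/usr/bin/env bash
# Sketch of the per-controller fio loop seen in the trace above (hypothetical helper).
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk
PLUGIN=$SPDK_DIR/build/fio/spdk_nvme
FIO_CONF=$SPDK_DIR/app/fio/nvme/example_config.fio

run_fio_for_bdf() {
  local bdf=$1                 # e.g. 0000:00:11.0
  local traddr=${bdf//:/.}     # the fio plugin filename syntax uses dots, as in the log

  # Skip controllers that report no active namespace.
  "$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" \
    | grep -qE '^Namespace ID:[0-9]+' || return 0

  # The log also greps for 'Extended Data LBA'; with extended LBAs the block
  # size would have to include metadata, but this run always ends up with 4096.
  local bs=4096

  # Find the ASan runtime the plugin was built against and preload it with the plugin,
  # mirroring the ldd | grep libasan | awk '{print $3}' step in autotest_common.sh.
  local asan_lib
  asan_lib=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}' || true)

  LD_PRELOAD="${asan_lib:+$asan_lib }$PLUGIN" \
    /usr/src/fio/fio "$FIO_CONF" "--filename=trtype=PCIe traddr=$traddr" --bs="$bs"
}

# The four controllers exercised in this run.
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
  run_fio_for_bdf "$bdf"
done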
09:41:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:45.013 09:41:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:45.013 09:41:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:45.013 09:41:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:45.013 09:41:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:45.013 09:41:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:45.013 09:41:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:45.013 09:41:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:45.013 09:41:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:45.274 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:45.274 fio-3.35 00:08:45.274 Starting 1 thread 00:08:51.858 00:08:51.858 test: (groupid=0, jobs=1): err= 0: pid=64264: Thu Nov 28 09:41:29 2024 00:08:51.858 read: IOPS=17.3k, BW=67.7MiB/s (70.9MB/s)(135MiB/2001msec) 00:08:51.858 slat (nsec): min=4244, max=84725, avg=6091.73, stdev=3322.32 00:08:51.858 clat (usec): min=526, max=12657, avg=3650.46, stdev=1293.32 00:08:51.858 lat (usec): min=531, max=12725, avg=3656.55, stdev=1294.74 00:08:51.858 clat percentiles (usec): 00:08:51.858 | 1.00th=[ 2147], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2671], 00:08:51.858 | 30.00th=[ 2835], 
40.00th=[ 2999], 50.00th=[ 3163], 60.00th=[ 3425], 00:08:51.858 | 70.00th=[ 3916], 80.00th=[ 4686], 90.00th=[ 5604], 95.00th=[ 6390], 00:08:51.858 | 99.00th=[ 7832], 99.50th=[ 8160], 99.90th=[ 9241], 99.95th=[10028], 00:08:51.858 | 99.99th=[12518] 00:08:51.858 bw ( KiB/s): min=62752, max=77944, per=100.00%, avg=71130.67, stdev=7716.02, samples=3 00:08:51.858 iops : min=15688, max=19486, avg=17782.67, stdev=1929.00, samples=3 00:08:51.858 write: IOPS=17.3k, BW=67.7MiB/s (71.0MB/s)(135MiB/2001msec); 0 zone resets 00:08:51.858 slat (usec): min=4, max=179, avg= 6.19, stdev= 3.42 00:08:51.858 clat (usec): min=535, max=12539, avg=3706.32, stdev=1292.82 00:08:51.858 lat (usec): min=540, max=12554, avg=3712.51, stdev=1294.19 00:08:51.858 clat percentiles (usec): 00:08:51.858 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2704], 00:08:51.858 | 30.00th=[ 2868], 40.00th=[ 3032], 50.00th=[ 3228], 60.00th=[ 3490], 00:08:51.858 | 70.00th=[ 4015], 80.00th=[ 4752], 90.00th=[ 5604], 95.00th=[ 6390], 00:08:51.858 | 99.00th=[ 7832], 99.50th=[ 8225], 99.90th=[ 9503], 99.95th=[10028], 00:08:51.858 | 99.99th=[11600] 00:08:51.858 bw ( KiB/s): min=63112, max=77728, per=100.00%, avg=71042.67, stdev=7387.15, samples=3 00:08:51.858 iops : min=15778, max=19432, avg=17760.67, stdev=1846.79, samples=3 00:08:51.858 lat (usec) : 750=0.01% 00:08:51.858 lat (msec) : 2=0.30%, 4=70.17%, 10=29.46%, 20=0.06% 00:08:51.858 cpu : usr=98.80%, sys=0.10%, ctx=4, majf=0, minf=606 00:08:51.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:51.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:51.858 issued rwts: total=34656,34686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:51.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:51.858 00:08:51.858 Run status group 0 (all jobs): 00:08:51.858 READ: bw=67.7MiB/s (70.9MB/s), 67.7MiB/s-67.7MiB/s (70.9MB/s-70.9MB/s), io=135MiB (142MB), run=2001-2001msec 00:08:51.858 WRITE: bw=67.7MiB/s (71.0MB/s), 67.7MiB/s-67.7MiB/s (71.0MB/s-71.0MB/s), io=135MiB (142MB), run=2001-2001msec 00:08:51.858 ----------------------------------------------------- 00:08:51.858 Suppressions used: 00:08:51.858 count bytes template 00:08:51.858 1 32 /usr/src/fio/parse.c 00:08:51.858 1 8 libtcmalloc_minimal.so 00:08:51.858 ----------------------------------------------------- 00:08:51.858 00:08:51.858 09:41:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:51.858 09:41:29 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:51.858 09:41:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:51.858 09:41:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:51.858 09:41:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:51.858 09:41:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:51.858 09:41:30 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:51.858 09:41:30 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:51.858 09:41:30 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:51.858 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:51.858 fio-3.35 00:08:51.858 Starting 1 thread 00:09:01.848 00:09:01.848 test: (groupid=0, jobs=1): err= 0: pid=64321: Thu Nov 28 09:41:38 2024 00:09:01.848 read: IOPS=18.3k, BW=71.6MiB/s (75.1MB/s)(143MiB/2001msec) 00:09:01.848 slat (nsec): min=4219, max=88288, avg=5617.80, stdev=3087.47 00:09:01.848 clat (usec): min=286, max=11126, avg=3458.75, stdev=1307.11 00:09:01.848 lat (usec): min=291, max=11131, avg=3464.37, stdev=1308.40 00:09:01.848 clat percentiles (usec): 00:09:01.848 | 1.00th=[ 1975], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2474], 00:09:01.848 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2933], 60.00th=[ 3228], 00:09:01.848 | 70.00th=[ 3752], 80.00th=[ 4555], 90.00th=[ 5473], 95.00th=[ 6259], 00:09:01.848 | 99.00th=[ 7373], 99.50th=[ 7832], 99.90th=[ 8717], 99.95th=[ 9503], 00:09:01.848 | 99.99th=[10552] 00:09:01.848 bw ( KiB/s): min=66746, max=80760, per=100.00%, avg=73574.00, stdev=7013.86, samples=3 00:09:01.848 iops : min=16686, max=20190, avg=18393.33, stdev=1753.71, samples=3 00:09:01.848 write: IOPS=18.3k, BW=71.6MiB/s (75.1MB/s)(143MiB/2001msec); 0 zone resets 00:09:01.848 slat (nsec): min=4263, max=72978, avg=5706.86, stdev=3145.25 00:09:01.848 clat (usec): min=205, max=11216, avg=3496.05, stdev=1312.00 00:09:01.848 lat (usec): min=209, max=11232, avg=3501.76, stdev=1313.31 00:09:01.848 clat percentiles (usec): 00:09:01.848 | 1.00th=[ 2008], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2507], 00:09:01.848 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2966], 60.00th=[ 3261], 00:09:01.848 | 70.00th=[ 3818], 80.00th=[ 4555], 90.00th=[ 5538], 95.00th=[ 6259], 
00:09:01.848 | 99.00th=[ 7504], 99.50th=[ 7898], 99.90th=[ 8848], 99.95th=[ 9503], 00:09:01.848 | 99.99th=[11076] 00:09:01.848 bw ( KiB/s): min=66658, max=80544, per=100.00%, avg=73488.67, stdev=6945.73, samples=3 00:09:01.848 iops : min=16664, max=20136, avg=18372.00, stdev=1736.68, samples=3 00:09:01.848 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:01.848 lat (msec) : 2=1.00%, 4=71.79%, 10=27.13%, 20=0.03% 00:09:01.848 cpu : usr=98.75%, sys=0.20%, ctx=4, majf=0, minf=604 00:09:01.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:01.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.848 issued rwts: total=36686,36680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.848 00:09:01.848 Run status group 0 (all jobs): 00:09:01.848 READ: bw=71.6MiB/s (75.1MB/s), 71.6MiB/s-71.6MiB/s (75.1MB/s-75.1MB/s), io=143MiB (150MB), run=2001-2001msec 00:09:01.848 WRITE: bw=71.6MiB/s (75.1MB/s), 71.6MiB/s-71.6MiB/s (75.1MB/s-75.1MB/s), io=143MiB (150MB), run=2001-2001msec 00:09:01.848 ----------------------------------------------------- 00:09:01.848 Suppressions used: 00:09:01.848 count bytes template 00:09:01.848 1 32 /usr/src/fio/parse.c 00:09:01.848 1 8 libtcmalloc_minimal.so 00:09:01.848 ----------------------------------------------------- 00:09:01.848 00:09:01.848 09:41:39 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:01.848 09:41:39 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:01.848 00:09:01.848 real 0m29.676s 00:09:01.848 user 0m16.705s 00:09:01.848 sys 0m24.368s 00:09:01.848 09:41:39 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.848 ************************************ 00:09:01.848 END TEST nvme_fio 00:09:01.848 ************************************ 00:09:01.848 09:41:39 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:01.848 ************************************ 00:09:01.848 END TEST nvme 00:09:01.848 ************************************ 00:09:01.848 00:09:01.848 real 1m38.928s 00:09:01.848 user 3m36.900s 00:09:01.848 sys 0m34.981s 00:09:01.848 09:41:39 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.848 09:41:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:01.848 09:41:39 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:01.848 09:41:39 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:01.848 09:41:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.848 09:41:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.848 09:41:39 -- common/autotest_common.sh@10 -- # set +x 00:09:01.848 ************************************ 00:09:01.848 START TEST nvme_scc 00:09:01.848 ************************************ 00:09:01.848 09:41:39 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:01.848 * Looking for test storage... 
00:09:01.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:01.848 09:41:39 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:01.848 09:41:39 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:01.848 09:41:39 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:01.848 09:41:39 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.848 09:41:39 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:01.848 09:41:39 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.848 09:41:39 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:01.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.848 --rc genhtml_branch_coverage=1 00:09:01.848 --rc genhtml_function_coverage=1 00:09:01.848 --rc genhtml_legend=1 00:09:01.848 --rc geninfo_all_blocks=1 00:09:01.848 --rc geninfo_unexecuted_blocks=1 00:09:01.848 00:09:01.848 ' 00:09:01.848 09:41:39 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:01.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.848 --rc genhtml_branch_coverage=1 00:09:01.848 --rc genhtml_function_coverage=1 00:09:01.848 --rc genhtml_legend=1 00:09:01.848 --rc geninfo_all_blocks=1 00:09:01.849 --rc geninfo_unexecuted_blocks=1 00:09:01.849 00:09:01.849 ' 00:09:01.849 09:41:39 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:01.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.849 --rc genhtml_branch_coverage=1 00:09:01.849 --rc genhtml_function_coverage=1 00:09:01.849 --rc genhtml_legend=1 00:09:01.849 --rc geninfo_all_blocks=1 00:09:01.849 --rc geninfo_unexecuted_blocks=1 00:09:01.849 00:09:01.849 ' 00:09:01.849 09:41:39 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:01.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.849 --rc genhtml_branch_coverage=1 00:09:01.849 --rc genhtml_function_coverage=1 00:09:01.849 --rc genhtml_legend=1 00:09:01.849 --rc geninfo_all_blocks=1 00:09:01.849 --rc geninfo_unexecuted_blocks=1 00:09:01.849 00:09:01.849 ' 00:09:01.849 09:41:39 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:01.849 09:41:39 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:01.849 09:41:39 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:01.849 09:41:39 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:01.849 09:41:39 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.849 09:41:39 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.849 09:41:39 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.849 09:41:39 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.849 09:41:39 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.849 09:41:39 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.849 09:41:39 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.849 09:41:39 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.849 09:41:39 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:01.849 09:41:39 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
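[editor's note] The register dump that follows is scan_nvme_ctrls from test/common/nvme/functions.sh walking each /sys/class/nvme/nvme* controller and copying every id-ctrl / id-ns field reported by nvme-cli into per-device associative arrays (nvme0[vid], nvme0[ssvid], and so on below). A simplified bash sketch of that parsing loop is given here, assuming the sysfs layout and the nvme-cli path shown in the log; unlike the real helper it records only the PCI address and the raw id-ctrl fields, and the array naming is illustrative.

#!/usr/bin/env bash
# Simplified sketch of the controller scan traced below (not the real functions.sh).
shopt -s nullglob
declare -A ctrls bdfs

for ctrl in /sys/class/nvme/nvme*; do
  name=${ctrl##*/}                                    # nvme0, nvme1, ...
  pci=$(basename "$(readlink -f "$ctrl/device")")     # e.g. 0000:00:11.0
  bdfs[$name]=$pci

  # nvme_get-style parsing: split each "field : value" line from nvme-cli.
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                          # strip spaces from the field name
    val="${val#"${val%%[![:space:]]*}"}"              # trim leading whitespace from the value
    [[ -n $reg ]] && ctrls["$name.$reg"]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$name")
done

declare -p bdfs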
00:09:01.849 09:41:39 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:01.849 09:41:39 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:01.849 09:41:39 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:01.849 09:41:39 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:01.849 09:41:39 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:01.849 09:41:39 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:01.849 09:41:39 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:01.849 09:41:39 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:01.849 09:41:39 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:01.849 09:41:39 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.849 09:41:39 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:01.849 09:41:39 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:01.849 09:41:39 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:01.849 09:41:39 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:01.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:01.849 Waiting for block devices as requested 00:09:01.849 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:01.849 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:01.849 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:01.849 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:07.175 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:07.175 09:41:45 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:07.175 09:41:45 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:07.175 09:41:45 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:07.175 09:41:45 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:07.175 09:41:45 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:07.175 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.176 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:07.177 09:41:45 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.177 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:07.178 09:41:45 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.178 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:07.179 
09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
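(Aside on the pattern traced above: nvme/functions.sh walks the output of the nvme-cli identify commands line by line, and with IFS set to ':' it reads each "field : value" pair and evals it into a global associative array named after the device, here ng0n1. The snippet below is a minimal standalone sketch of that parsing idea, not the SPDK implementation; the helper name parse_id_output and the whitespace-trimming details are illustrative assumptions.)

#!/usr/bin/env bash
# Sketch: collect "field : value" lines from nvme-cli identify output
# into a bash associative array, mirroring the trace above.
parse_id_output() {   # parse_id_output <array-name> <id-ctrl|id-ns> <device>
    local -n _out=$1                                # nameref to caller's array (bash 4.3+)
    local reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                    # e.g. "lbaf  0 " -> "lbaf0"
        val="${val#"${val%%[![:space:]]*}"}"        # strip leading whitespace from the value
        [[ -n $reg && -n $val ]] || continue        # skip headers and empty fields
        _out[$reg]=$val
    done < <(nvme "$2" "$3")                        # e.g. nvme id-ns /dev/ng0n1
}

declare -A ns_info=()
parse_id_output ns_info id-ns /dev/ng0n1
echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"

(End of aside; the trace continues below.)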
00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:07.179 09:41:45 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.179 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:07.180 09:41:45 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.180 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:07.181 09:41:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:07.181 09:41:45 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:07.182 09:41:45 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:07.182 09:41:45 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:07.182 09:41:45 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:07.182 09:41:45 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:07.182 09:41:45 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.182 
09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:07.182 
09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:07.182 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.182 09:41:45 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:07.183 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
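The trace above is nvme/functions.sh's nvme_get filling the nvme1 associative array: it runs nvme-cli's id-ctrl against /dev/nvme1, splits each "field : value" line on ':' via IFS, and evals one nvme1[field]=value assignment per register. A minimal standalone sketch of that pattern, assuming nvme-cli's default human-readable output (the trimming details and variable names here are illustrative, not the exact functions.sh code):

  declare -A ctrl=()
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue              # keep only "field : value" lines
      reg=${reg//[[:space:]]/}               # strip padding around the field name
      val=${val#"${val%%[![:space:]]*}"}     # drop leading whitespace from the value
      ctrl[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)
  printf 'sqes=%s cqes=%s oncs=%s\n' "${ctrl[sqes]}" "${ctrl[cqes]}" "${ctrl[oncs]}"

Because read assigns the remainder of each line to the last variable, multi-colon values such as the ps0 power-state descriptor captured a little further down survive intact.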
00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.184 09:41:45 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:07.184 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:07.185 09:41:45 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
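Once the controller registers are stored, functions.sh@54 (visible in the trace) walks every namespace node of the controller with an extglob pattern that matches both the generic character device (ng1n1) and the block device (nvme1n1). A hedged sketch of how that glob expands, assuming the sysfs layout shown here (requires shopt -s extglob; the echo is illustrative only):

  shopt -s extglob
  ctrl=/sys/class/nvme/nvme1
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      # "ng${ctrl##*nvme}" -> ng1, "${ctrl##*/}n" -> nvme1n, so this
      # expands to .../ng1n1 and .../nvme1n1 when those entries exist
      echo "namespace entry: ${ns##*/}"
  done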
00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.185 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:07.186 09:41:45 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 
09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:07.186 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
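For each namespace device the same IFS=':' read loop records the id-ns fields. In the ng1n1 capture above, flbas=0x7 selects LBA format 7, whose descriptor reads 'ms:64 lbads:12 rp:0 (in use)', i.e. 4096-byte data blocks with 64 bytes of metadata. A small sketch of deriving the block size from those two fields (array contents copied from the trace; the arithmetic is plain shell, not functions.sh code, and it assumes the format index fits in the low nibble of FLBAS):

  declare -A ns=([flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)')
  fmt=$(( ${ns[flbas]} & 0xf ))               # low nibble of FLBAS picks the format
  lbaf=${ns[lbaf$fmt]}
  lbads=${lbaf##*lbads:}; lbads=${lbads%% *}  # lbads is a power-of-two exponent
  echo "LBA format $fmt: block size $(( 1 << lbads )) bytes"   # -> 4096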
00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:07.187 
09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:07.187 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:07.188 09:41:45 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:07.188 09:41:45 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:07.188 09:41:45 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:07.188 09:41:45 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:07.188 09:41:45 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
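At functions.sh@60-63, earlier in the trace, the finished controller is registered: ctrls[nvme1]=nvme1, nvmes[nvme1]=nvme1_ns (the name of its namespace map), bdfs[nvme1]=0000:00:10.0, plus a slot in ordered_ctrls, before the loop moves on to nvme2 at 0000:00:12.0. A hedged sketch of reading that layout back afterwards with bash namerefs (array contents abbreviated from the trace; the variable names ns_map and ns are illustrative):

  declare -A nvme1n1=([nsze]=0x17a17a [flbas]=0x7)
  declare -A nvme1_ns=([1]=nvme1n1)
  declare -A nvmes=([nvme1]=nvme1_ns) bdfs=([nvme1]="0000:00:10.0")
  declare -n ns_map=${nvmes[nvme1]}   # follow the indirection to nvme1_ns
  declare -n ns=${ns_map[1]}          # ...and again to the per-namespace array
  echo "nvme1 @ ${bdfs[nvme1]}: ns1 nsze=${ns[nsze]} flbas=${ns[flbas]}"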
00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:07.188 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:07.189 09:41:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.189 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:07.190 09:41:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:07.190 
09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.190 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.191 
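The xtrace entries above (functions.sh@16 through @23) show how the harness converts nvme id-ctrl output for /dev/nvme2 into the bash associative array nvme2: every "field : value" line is split on IFS=:, fields with no value are skipped, and the pair is stored with eval. A minimal sketch of that loop, reconstructed from the trace rather than copied from functions.sh, with the trimming details and call layout assumed:

nvme_get() {
    # Sketch only, not the verbatim nvme/functions.sh helper.
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                          # e.g. declare -gA nvme2=()   (functions.sh@20)
    while IFS=: read -r reg val; do              # functions.sh@21
        [[ -n $val ]] || continue                # functions.sh@22: skip header lines / empty values
        reg=${reg//[[:space:]]/}                 # assumed: strip blanks from the key
        val=${val#"${val%%[![:space:]]*}"}       # assumed: strip leading blanks, keep trailing ones
        eval "${ref}[\$reg]=\"\$val\""           # functions.sh@23: nvme2[mdts]=7, nvme2[oacs]=0x12a, ...
    done < <(/usr/local/src/nvme-cli/nvme "$@")  # binary path as seen at functions.sh@16
}

Called as nvme_get nvme2 id-ctrl /dev/nvme2 (the argument layout is inferred from the nvme_get ng2n1 id-ns /dev/ng2n1 call traced below), this reproduces assignments such as nvme2[fr]='8.0.0 ' and nvme2[subnqn]=nqn.2019-08.org.qemu:12342 recorded in this log.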
09:41:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.191 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.192 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:07.193 09:41:45 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 
09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.193 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.194 09:41:45 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:07.194 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.195 09:41:45 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.195 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.196 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.196 09:41:45 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:07.197 09:41:45 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.197 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:07.198 09:41:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.198 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:07.199 
09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:07.199 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:07.200 09:41:45 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.200 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.201 09:41:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:07.201 09:41:45 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:07.201 09:41:45 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:07.201 09:41:45 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:07.201 09:41:45 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:07.201 09:41:45 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.201 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:07.201 09:41:45 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:07.202 09:41:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 
09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:07.202 09:41:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.202 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 
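(Among the id-ctrl fields captured for nvme3 above, wctemp=343 and cctemp=373 are composite temperature thresholds reported in Kelvin, so this QEMU controller advertises a 70 °C warning and 100 °C critical threshold. A one-line sanity check, with the values copied from the trace:)

    # Sketch: WCTEMP/CCTEMP from id-ctrl are Kelvin; values taken from the trace above.
    wctemp=343 cctemp=373
    echo "warning: $((wctemp - 273))C critical: $((cctemp - 273))C"   # -> warning: 70C critical: 100C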
09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:07.203 
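(The sqes=0x66 and cqes=0x44 fields being captured here encode the submission and completion queue entry sizes as a pair of nibbles, each the base-2 log of a size in bytes: low nibble = required minimum, high nibble = supported maximum, i.e. 64-byte SQEs and 16-byte CQEs for this controller. A small decode sketch, not part of the test scripts:)

    # Sketch: decode the SQES/CQES nibbles (each nibble is log2 of an entry size in bytes).
    decode_entry_size() {
        local v=$(( $1 ))
        echo "min=$(( 1 << (v & 0xf) ))B max=$(( 1 << ((v >> 4) & 0xf) ))B"
    }
    decode_entry_size 0x66   # SQES -> min=64B max=64B
    decode_entry_size 0x44   # CQES -> min=16B max=16B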
09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.203 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.204 09:41:45 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:07.204 09:41:45 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:07.204 09:41:45 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:07.204 09:41:45 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:07.205 09:41:45 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:07.205 09:41:45 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:07.205 09:41:45 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:07.466 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:08.038 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:08.038 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:08.038 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:08.038 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:08.038 09:41:46 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:08.038 09:41:46 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:08.038 09:41:46 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.038 09:41:46 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:08.038 ************************************ 00:09:08.038 START TEST nvme_simple_copy 00:09:08.038 ************************************ 00:09:08.038 09:41:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:08.611 Initializing NVMe Controllers 00:09:08.611 Attaching to 0000:00:10.0 00:09:08.611 Controller supports SCC. Attached to 0000:00:10.0 00:09:08.611 Namespace ID: 1 size: 6GB 00:09:08.611 Initialization complete. 
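(The get_ctrls_with_feature scan above decides SCC support by reading each controller's ONCS value, 0x15d here, and testing bit 8, the Copy command bit; all four controllers qualify and nvme1 at 0000:00:10.0 is chosen. The nvme_simple_copy run that follows writes LBAs 0-63 with random data, issues a copy to destination LBA 256, and reports "LBAs matching Written Data: 64" when the read-back compares clean. A hedged sketch of the same ONCS check, detached from functions.sh:)

    # Sketch: ONCS bit 8 indicates Copy command support (the SCC test requirement).
    oncs=0x15d                        # value reported by the controllers in the trace above
    if (( oncs & (1 << 8) )); then
        echo "controller supports the Simple Copy command"
    fi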
00:09:08.611 00:09:08.611 Controller QEMU NVMe Ctrl (12340 ) 00:09:08.611 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:08.611 Namespace Block Size:4096 00:09:08.611 Writing LBAs 0 to 63 with Random Data 00:09:08.611 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:08.611 LBAs matching Written Data: 64 00:09:08.611 ************************************ 00:09:08.611 END TEST nvme_simple_copy 00:09:08.611 ************************************ 00:09:08.611 00:09:08.611 real 0m0.277s 00:09:08.611 user 0m0.110s 00:09:08.611 sys 0m0.064s 00:09:08.611 09:41:47 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.611 09:41:47 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:08.611 ************************************ 00:09:08.611 END TEST nvme_scc 00:09:08.611 ************************************ 00:09:08.611 00:09:08.611 real 0m7.952s 00:09:08.611 user 0m1.206s 00:09:08.611 sys 0m1.402s 00:09:08.611 09:41:47 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.611 09:41:47 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:08.611 09:41:47 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:08.612 09:41:47 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:08.612 09:41:47 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:08.612 09:41:47 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:08.612 09:41:47 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:08.612 09:41:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.612 09:41:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.612 09:41:47 -- common/autotest_common.sh@10 -- # set +x 00:09:08.612 ************************************ 00:09:08.612 START TEST nvme_fdp 00:09:08.612 ************************************ 00:09:08.612 09:41:47 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:08.612 * Looking for test storage... 00:09:08.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:08.612 09:41:47 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:08.612 09:41:47 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:08.612 09:41:47 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:08.612 09:41:47 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:08.612 09:41:47 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.612 09:41:47 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:08.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.612 --rc genhtml_branch_coverage=1 00:09:08.612 --rc genhtml_function_coverage=1 00:09:08.612 --rc genhtml_legend=1 00:09:08.612 --rc geninfo_all_blocks=1 00:09:08.612 --rc geninfo_unexecuted_blocks=1 00:09:08.612 00:09:08.612 ' 00:09:08.612 09:41:47 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:08.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.612 --rc genhtml_branch_coverage=1 00:09:08.612 --rc genhtml_function_coverage=1 00:09:08.612 --rc genhtml_legend=1 00:09:08.612 --rc geninfo_all_blocks=1 00:09:08.612 --rc geninfo_unexecuted_blocks=1 00:09:08.612 00:09:08.612 ' 00:09:08.612 09:41:47 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:08.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.612 --rc genhtml_branch_coverage=1 00:09:08.612 --rc genhtml_function_coverage=1 00:09:08.612 --rc genhtml_legend=1 00:09:08.612 --rc geninfo_all_blocks=1 00:09:08.612 --rc geninfo_unexecuted_blocks=1 00:09:08.612 00:09:08.612 ' 00:09:08.612 09:41:47 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:08.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.612 --rc genhtml_branch_coverage=1 00:09:08.612 --rc genhtml_function_coverage=1 00:09:08.612 --rc genhtml_legend=1 00:09:08.612 --rc geninfo_all_blocks=1 00:09:08.612 --rc geninfo_unexecuted_blocks=1 00:09:08.612 00:09:08.612 ' 00:09:08.612 09:41:47 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:08.612 09:41:47 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:08.612 09:41:47 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:08.612 09:41:47 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:08.612 09:41:47 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.612 09:41:47 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.612 09:41:47 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.612 09:41:47 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.612 09:41:47 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.612 09:41:47 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:08.612 09:41:47 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.612 09:41:47 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:08.612 09:41:47 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:08.612 09:41:47 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:08.612 09:41:47 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:08.612 09:41:47 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:08.612 09:41:47 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:08.612 09:41:47 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:08.612 09:41:47 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:08.612 09:41:47 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:08.612 09:41:47 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.612 09:41:47 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:08.874 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:09.135 Waiting for block devices as requested 00:09:09.136 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.136 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.397 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.397 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.700 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:14.700 09:41:53 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:14.700 09:41:53 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:14.700 09:41:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:14.700 09:41:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:14.700 09:41:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:14.700 09:41:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.700 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:14.701 09:41:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:14.701 09:41:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:14.701 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:14.702 09:41:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.702 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.703 
09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:14.703 09:41:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:14.703 09:41:53 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:14.703 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:14.704 09:41:53 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:14.704 09:41:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.704 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
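Note on the ng0n1 fields recorded above: the low nibble of FLBAS (0x4) selects LBA format 4, whose descriptor "ms:0 lbads:12 rp:0 (in use)" means 4096-byte blocks, so an NSZE of 0x140000 blocks puts this namespace at 5 GiB. A minimal sketch of that arithmetic, using literal values copied from the trace rather than the live associative arrays:

declare -A ng0n1=( [flbas]=0x4 [nsze]=0x140000 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )
fmt=$(( ${ng0n1[flbas]} & 0xf ))                                        # low nibble of FLBAS -> 4
lbads=$(sed -n 's/.*lbads:\([0-9]\+\).*/\1/p' <<< "${ng0n1[lbaf$fmt]}") # 12
echo "LBA format $fmt: $(( 1 << lbads ))-byte blocks"
echo "capacity: $(( ${ng0n1[nsze]} * (1 << lbads) )) bytes"             # 0x140000 * 4096 = 5 GiB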
00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.705 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:14.706 09:41:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.706 09:41:53 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.706 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:14.707 09:41:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:14.707 09:41:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:14.707 09:41:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:14.707 09:41:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:14.707 09:41:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.707 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
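The id-ctrl fields being filled in here for nvme1 come from the controller-discovery loop that resumed at functions.sh@47 above: each /sys/class/nvme/nvme* entry is checked with pci_can_use, identified with nvme id-ctrl via the same nvme_get helper, and registered in the ctrls/nvmes/bdfs/ordered_ctrls maps, exactly as was just done for nvme0 at 0000:00:11.0. A rough sketch of that loop, reusing pci_can_use (scripts/common.sh) and nvme_get from the trace; the BDF lookup via sysfs is an assumption:

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
for ctrl in /sys/class/nvme/nvme*; do
    pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed: BDF from sysfs, e.g. 0000:00:10.0
    pci_can_use "$pci" || continue                    # skip devices blocked or claimed elsewhere
    ctrl_dev=${ctrl##*/}                              # e.g. nvme1
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills nvme1[vid], nvme1[oacs], ... as seen here
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of the controller's namespace map
    bdfs["$ctrl_dev"]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # slot controllers by index
done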
00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:14.708 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:14.709 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:14.710 09:41:53 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
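The lines above begin the same id-ns walk for nvme1's first namespace, ng1n1. Namespaces are discovered with an extglob pattern that matches both the generic character node (ng1n1) and the block node (nvme1n1) under the controller's sysfs directory, and each hit is parsed with nvme_get and indexed by namespace number in the per-controller map referenced through a nameref (nvme1_ns). A condensed sketch of that walk; declaring the map inline is an assumption made to keep the sketch self-contained:

shopt -s extglob                                    # needed for the @(...|...) pattern below
declare -gA "${ctrl_dev}_ns=()"                     # per-controller namespace map, e.g. nvme1_ns
declare -n _ctrl_ns=${ctrl_dev}_ns                  # nameref, as at functions.sh@53
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue                        # matches ng1n1 (char node) and nvme1n1 (block node)
    ns_dev=${ns##*/}
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"         # fills ng1n1[nsze], ng1n1[flbas], ... as above
    _ctrl_ns[${ns##*n}]=$ns_dev                     # index by namespace number (here: 1)
done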
00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.710 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:14.711 09:41:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.711 09:41:53 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.711 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.711 09:41:53 nvme_fdp -- 
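The in-use LBA format noted above follows from flbas: its low four bits select the format index, and the chosen lbafN entry gives the metadata size (ms) and the log2 of the data-block size (lbads). A quick arithmetic check against the values recorded in this trace (illustrative only, not part of the test run):

    flbas=0x7
    fmt=$(( flbas & 0xf ))     # low 4 bits of FLBAS -> format index 7
    lbads=12                   # from "lbaf7: ms:64 lbads:12 rp:0 (in use)"
    echo "LBA format $fmt in use: $(( 1 << lbads ))-byte data blocks, 64 bytes of metadata"
    # prints: LBA format 7 in use: 4096-byte data blocks, 64 bytes of metadata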
nvme1n1 id-ns matches ng1n1 field for field: nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1; nawun through nows all 0; mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0; nguid and eui64 all zero; same lbaf0-7 table with lbaf7 (ms:64 lbads:12 rp:0) in use.
_ctrl_ns[1]=nvme1n1; controller registered: ctrls[nvme1]=nvme1, nvmes[nvme1]=nvme1_ns, bdfs[nvme1]=0000:00:10.0, ordered_ctrls[1]=nvme1.
Next controller: /sys/class/nvme/nvme2 exists, pci=0000:00:12.0, pci_can_use returns 0, so ctrl_dev=nvme2 and nvme_get nvme2 id-ctrl /dev/nvme2 starts.
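For readers following the control flow rather than the raw trace: the loop above simply walks /sys/class/nvme, records each controller's PCI address, and visits both its generic (ngXnY) and block (nvmeXnY) namespace nodes. Below is a minimal, hypothetical sketch of that walk; it is not the SPDK nvme/functions.sh helper itself, and the readlink-based PCI lookup assumes a PCIe-attached controller.

    #!/usr/bin/env bash
    # Sketch: list each NVMe controller, its PCI address (BDF), and its namespace nodes.
    shopt -s nullglob extglob
    for ctrl in /sys/class/nvme/nvme+([0-9]); do
        name=${ctrl##*/}                                  # e.g. nvme1
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0 (assumes PCIe)
        echo "controller $name at $bdf"
        for ns in "$ctrl"/@("ng${name#nvme}"|"${name}n")*; do
            echo "  namespace node ${ns##*/}"             # e.g. ng1n1, nvme1n1
        done
    done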
nvme2 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12342' mn='QEMU NVMe Ctrl' fr='8.0.0' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000; crdt1-3, nvmsr, vwci and mec all 0; oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373; mtfa, hmpre, hmmin, tnvmcap, unvmcap, rpmbs, edstt, dsto, fwug, kas, hctma, mntmt, mxtmt, sanicap, hmminds, hmmaxd, nsetidmax, endgidmax, anatt, anacap, anagrpmax, nanagrpid, pels, domainid and megcap all 0; sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0; ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-.
Namespace scan for nvme2 (nvme2_ns): /sys/class/nvme/nvme2/ng2n1 exists, so nvme_get ng2n1 id-ns /dev/ng2n1 starts: nsze=0x100000 ncap=0x100000
IFS=: 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.716 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 
09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.717 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:14.718 09:41:53 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:14.718 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 
09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.719 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:14.986 
09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:14.986 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:14.987 09:41:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.987 09:41:53 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.987 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:14.988 09:41:53 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.988 
09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:14.988 09:41:53 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.988 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.989 
09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
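The trace above is the nvme_get helper in nvme/functions.sh turning the output of /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 into a global bash associative array: each output line is split on the first colon via IFS, lines with an empty value (the first [[ -n '' ]] check, i.e. the identify header) are skipped, and every remaining register/value pair is eval'd into the array, e.g. nvme2n1[nsze]=0x100000. The sketch below mirrors that pattern under stated assumptions; the function name sketch_nvme_get and the exact whitespace trimming are illustrative, not the SPDK source itself.

    sketch_nvme_get() {
        # $1 = name of the global associative array to fill (e.g. nvme2n1);
        # the remaining args form the command whose output is parsed.
        local ref=$1 reg val
        shift
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}            # "lbaf  0 " -> "lbaf0"
            [[ -n $val ]] || continue           # skip header/blank lines
            eval "${ref}[${reg}]=\"${val# }\""  # e.g. nvme2n1[nsze]="0x100000"
        done < <("$@")
    }

    # Usage (assumes nvme-cli is installed and the device node exists):
    #   sketch_nvme_get nvme2n1 nvme id-ns /dev/nvme2n1
    #   echo "${nvme2n1[flbas]}"   # 0x4, matching the value captured above

The same helper is run against the generic namespace nodes (ng2n1..ng2n3) and, a little further on, against id-ctrl output for the next controller (nvme3); only the key names differ.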
00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:14.989 09:41:53 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.989 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:14.990 09:41:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.990 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:14.991 09:41:53 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.991 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:14.992 09:41:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.992 09:41:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:14.992 09:41:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:14.992 09:41:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:14.992 09:41:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:14.992 09:41:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.992 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.993 09:41:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 
09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.993 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:14.994 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:14.995 09:41:53 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:14.995 09:41:53 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:14.996 09:41:53 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:14.996 09:41:53 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:14.996 09:41:53 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:14.996 09:41:53 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:15.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:16.143 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:16.143 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:16.143 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:16.143 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:16.143 09:41:54 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:16.143 09:41:54 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:16.143 09:41:54 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.143 09:41:54 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:16.143 ************************************ 00:09:16.143 START TEST nvme_flexible_data_placement 00:09:16.143 ************************************ 00:09:16.143 09:41:54 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:16.404 Initializing NVMe Controllers 00:09:16.404 Attaching to 0000:00:13.0 00:09:16.404 Controller supports FDP Attached to 0000:00:13.0 00:09:16.404 Namespace ID: 1 Endurance Group ID: 1 00:09:16.404 Initialization complete. 
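The trace above shows functions.sh reading each controller's identify fields into a bash associative array and then selecting the controller whose CTRATT value has bit 19 (Flexible Data Placement) set; only nvme3, with ctratt 0x88010, qualifies, which is why the fdp test binds to 0000:00:13.0. A minimal sketch of that selection check follows, assuming bash associative arrays and reusing the ctratt values seen in the trace; the helper name ctrl_supports_fdp is illustrative (the script's own helper is ctrl_has_fdp).

#!/usr/bin/env bash
# Sketch of the FDP controller selection traced above; values are from the log.
declare -A ctratt_by_ctrl=(
  [nvme0]=0x8000   # bit 19 clear -> no FDP
  [nvme1]=0x8000   # bit 19 clear -> no FDP
  [nvme2]=0x8000   # bit 19 clear -> no FDP
  [nvme3]=0x88010  # bit 19 set   -> FDP capable
)

ctrl_supports_fdp() {
  local ctratt=$1
  # Return success when the Flexible Data Placement bit (19) of CTRATT is set.
  (( ctratt & 1 << 19 ))
}

for ctrl in "${!ctratt_by_ctrl[@]}"; do
  if ctrl_supports_fdp "${ctratt_by_ctrl[$ctrl]}"; then
    echo "$ctrl"   # prints nvme3, matching the trace
  fi
done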
00:09:16.404 00:09:16.404 ================================== 00:09:16.404 == FDP tests for Namespace: #01 == 00:09:16.404 ================================== 00:09:16.404 00:09:16.404 Get Feature: FDP: 00:09:16.404 ================= 00:09:16.404 Enabled: Yes 00:09:16.404 FDP configuration Index: 0 00:09:16.404 00:09:16.404 FDP configurations log page 00:09:16.404 =========================== 00:09:16.404 Number of FDP configurations: 1 00:09:16.404 Version: 0 00:09:16.404 Size: 112 00:09:16.404 FDP Configuration Descriptor: 0 00:09:16.404 Descriptor Size: 96 00:09:16.404 Reclaim Group Identifier format: 2 00:09:16.404 FDP Volatile Write Cache: Not Present 00:09:16.404 FDP Configuration: Valid 00:09:16.404 Vendor Specific Size: 0 00:09:16.404 Number of Reclaim Groups: 2 00:09:16.404 Number of Reclaim Unit Handles: 8 00:09:16.404 Max Placement Identifiers: 128 00:09:16.404 Number of Namespaces Supported: 256 00:09:16.404 Reclaim unit Nominal Size: 6000000 bytes 00:09:16.404 Estimated Reclaim Unit Time Limit: Not Reported 00:09:16.404 RUH Desc #000: RUH Type: Initially Isolated 00:09:16.404 RUH Desc #001: RUH Type: Initially Isolated 00:09:16.404 RUH Desc #002: RUH Type: Initially Isolated 00:09:16.404 RUH Desc #003: RUH Type: Initially Isolated 00:09:16.404 RUH Desc #004: RUH Type: Initially Isolated 00:09:16.404 RUH Desc #005: RUH Type: Initially Isolated 00:09:16.404 RUH Desc #006: RUH Type: Initially Isolated 00:09:16.404 RUH Desc #007: RUH Type: Initially Isolated 00:09:16.404 00:09:16.404 FDP reclaim unit handle usage log page 00:09:16.404 ====================================== 00:09:16.404 Number of Reclaim Unit Handles: 8 00:09:16.404 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:16.404 RUH Usage Desc #001: RUH Attributes: Unused 00:09:16.404 RUH Usage Desc #002: RUH Attributes: Unused 00:09:16.404 RUH Usage Desc #003: RUH Attributes: Unused 00:09:16.404 RUH Usage Desc #004: RUH Attributes: Unused 00:09:16.404 RUH Usage Desc #005: RUH Attributes: Unused 00:09:16.404 RUH Usage Desc #006: RUH Attributes: Unused 00:09:16.404 RUH Usage Desc #007: RUH Attributes: Unused 00:09:16.404 00:09:16.404 FDP statistics log page 00:09:16.404 ======================= 00:09:16.404 Host bytes with metadata written: 1037709312 00:09:16.404 Media bytes with metadata written: 1037819904 00:09:16.404 Media bytes erased: 0 00:09:16.404 00:09:16.404 FDP Reclaim unit handle status 00:09:16.404 ============================== 00:09:16.404 Number of RUHS descriptors: 2 00:09:16.404 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000425d 00:09:16.404 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:16.404 00:09:16.404 FDP write on placement id: 0 success 00:09:16.404 00:09:16.404 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:16.404 00:09:16.404 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:16.404 00:09:16.404 Get Feature: FDP Events for Placement handle: #0 00:09:16.404 ======================== 00:09:16.404 Number of FDP Events: 6 00:09:16.404 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:16.404 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:16.404 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:16.404 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:16.404 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:16.404 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:16.404 00:09:16.404 FDP events log
page 00:09:16.404 =================== 00:09:16.404 Number of FDP events: 1 00:09:16.404 FDP Event #0: 00:09:16.404 Event Type: RU Not Written to Capacity 00:09:16.404 Placement Identifier: Valid 00:09:16.404 NSID: Valid 00:09:16.404 Location: Valid 00:09:16.404 Placement Identifier: 0 00:09:16.404 Event Timestamp: 6 00:09:16.404 Namespace Identifier: 1 00:09:16.404 Reclaim Group Identifier: 0 00:09:16.404 Reclaim Unit Handle Identifier: 0 00:09:16.404 00:09:16.404 FDP test passed 00:09:16.404 00:09:16.404 real 0m0.242s 00:09:16.404 user 0m0.075s 00:09:16.404 sys 0m0.064s 00:09:16.404 09:41:55 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.404 09:41:55 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:16.404 ************************************ 00:09:16.404 END TEST nvme_flexible_data_placement 00:09:16.404 ************************************ 00:09:16.404 00:09:16.404 real 0m7.836s 00:09:16.404 user 0m1.167s 00:09:16.404 sys 0m1.331s 00:09:16.404 09:41:55 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.404 ************************************ 00:09:16.404 09:41:55 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:16.404 END TEST nvme_fdp 00:09:16.404 ************************************ 00:09:16.404 09:41:55 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:16.404 09:41:55 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:16.404 09:41:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:16.404 09:41:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.404 09:41:55 -- common/autotest_common.sh@10 -- # set +x 00:09:16.404 ************************************ 00:09:16.404 START TEST nvme_rpc 00:09:16.404 ************************************ 00:09:16.404 09:41:55 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:16.404 * Looking for test storage... 
00:09:16.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:16.404 09:41:55 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.404 09:41:55 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.404 09:41:55 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.665 09:41:55 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.665 --rc genhtml_branch_coverage=1 00:09:16.665 --rc genhtml_function_coverage=1 00:09:16.665 --rc genhtml_legend=1 00:09:16.665 --rc geninfo_all_blocks=1 00:09:16.665 --rc geninfo_unexecuted_blocks=1 00:09:16.665 00:09:16.665 ' 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.665 --rc genhtml_branch_coverage=1 00:09:16.665 --rc genhtml_function_coverage=1 00:09:16.665 --rc genhtml_legend=1 00:09:16.665 --rc geninfo_all_blocks=1 00:09:16.665 --rc geninfo_unexecuted_blocks=1 00:09:16.665 00:09:16.665 ' 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:16.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.665 --rc genhtml_branch_coverage=1 00:09:16.665 --rc genhtml_function_coverage=1 00:09:16.665 --rc genhtml_legend=1 00:09:16.665 --rc geninfo_all_blocks=1 00:09:16.665 --rc geninfo_unexecuted_blocks=1 00:09:16.665 00:09:16.665 ' 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.665 --rc genhtml_branch_coverage=1 00:09:16.665 --rc genhtml_function_coverage=1 00:09:16.665 --rc genhtml_legend=1 00:09:16.665 --rc geninfo_all_blocks=1 00:09:16.665 --rc geninfo_unexecuted_blocks=1 00:09:16.665 00:09:16.665 ' 00:09:16.665 09:41:55 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.665 09:41:55 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:16.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.665 09:41:55 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:16.665 09:41:55 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65713 00:09:16.665 09:41:55 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:16.665 09:41:55 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:16.665 09:41:55 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65713 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65713 ']' 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.665 09:41:55 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.665 [2024-11-28 09:41:55.481928] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:16.665 [2024-11-28 09:41:55.482197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65713 ] 00:09:16.926 [2024-11-28 09:41:55.643747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:16.927 [2024-11-28 09:41:55.739361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.927 [2024-11-28 09:41:55.739445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.499 09:41:56 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.499 09:41:56 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:17.499 09:41:56 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:17.760 Nvme0n1 00:09:17.760 09:41:56 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:17.760 09:41:56 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:18.020 request: 00:09:18.020 { 00:09:18.020 "bdev_name": "Nvme0n1", 00:09:18.020 "filename": "non_existing_file", 00:09:18.020 "method": "bdev_nvme_apply_firmware", 00:09:18.020 "req_id": 1 00:09:18.020 } 00:09:18.020 Got JSON-RPC error response 00:09:18.020 response: 00:09:18.020 { 00:09:18.020 "code": -32603, 00:09:18.020 "message": "open file failed." 00:09:18.020 } 00:09:18.020 09:41:56 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:18.020 09:41:56 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:18.020 09:41:56 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:18.279 09:41:56 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:18.279 09:41:56 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65713 00:09:18.279 09:41:56 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65713 ']' 00:09:18.279 09:41:56 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65713 00:09:18.279 09:41:56 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:18.279 09:41:56 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.279 09:41:56 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65713 00:09:18.279 killing process with pid 65713 00:09:18.279 09:41:57 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.279 09:41:57 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.279 09:41:57 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65713' 00:09:18.279 09:41:57 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65713 00:09:18.279 09:41:57 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65713 00:09:19.653 ************************************ 00:09:19.653 END TEST nvme_rpc 00:09:19.653 ************************************ 00:09:19.653 00:09:19.653 real 0m2.930s 00:09:19.653 user 0m5.551s 00:09:19.653 sys 0m0.496s 00:09:19.653 09:41:58 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.653 09:41:58 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.653 09:41:58 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:19.653 09:41:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:19.653 09:41:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.653 09:41:58 -- common/autotest_common.sh@10 -- # set +x 00:09:19.653 ************************************ 00:09:19.653 START TEST nvme_rpc_timeouts 00:09:19.653 ************************************ 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:19.653 * Looking for test storage... 00:09:19.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.653 09:41:58 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:19.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.653 --rc genhtml_branch_coverage=1 00:09:19.653 --rc genhtml_function_coverage=1 00:09:19.653 --rc genhtml_legend=1 00:09:19.653 --rc geninfo_all_blocks=1 00:09:19.653 --rc geninfo_unexecuted_blocks=1 00:09:19.653 00:09:19.653 ' 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:19.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.653 --rc genhtml_branch_coverage=1 00:09:19.653 --rc genhtml_function_coverage=1 00:09:19.653 --rc genhtml_legend=1 00:09:19.653 --rc geninfo_all_blocks=1 00:09:19.653 --rc geninfo_unexecuted_blocks=1 00:09:19.653 00:09:19.653 ' 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:19.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.653 --rc genhtml_branch_coverage=1 00:09:19.653 --rc genhtml_function_coverage=1 00:09:19.653 --rc genhtml_legend=1 00:09:19.653 --rc geninfo_all_blocks=1 00:09:19.653 --rc geninfo_unexecuted_blocks=1 00:09:19.653 00:09:19.653 ' 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:19.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.653 --rc genhtml_branch_coverage=1 00:09:19.653 --rc genhtml_function_coverage=1 00:09:19.653 --rc genhtml_legend=1 00:09:19.653 --rc geninfo_all_blocks=1 00:09:19.653 --rc geninfo_unexecuted_blocks=1 00:09:19.653 00:09:19.653 ' 00:09:19.653 09:41:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.653 09:41:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65772 00:09:19.653 09:41:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65772 00:09:19.653 09:41:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65804 00:09:19.653 09:41:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:19.653 09:41:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:19.653 09:41:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65804 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65804 ']' 00:09:19.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.653 09:41:58 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:19.653 [2024-11-28 09:41:58.408031] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:19.653 [2024-11-28 09:41:58.408347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65804 ] 00:09:19.915 [2024-11-28 09:41:58.571118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:19.915 [2024-11-28 09:41:58.668381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.915 [2024-11-28 09:41:58.668485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.487 09:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.487 Checking default timeout settings: 00:09:20.487 09:41:59 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:20.487 09:41:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:20.487 09:41:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:20.747 Making settings changes with rpc: 00:09:20.747 09:41:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:20.747 09:41:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:21.008 Check default vs. modified settings: 00:09:21.008 09:41:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:21.008 09:41:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65772 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65772 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:21.267 Setting action_on_timeout is changed as expected. 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65772 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65772 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:21.267 Setting timeout_us is changed as expected. 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
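The per-setting verification traced above (and continued below for timeout_admin_us) boils down to the following shell loop. This is a condensed sketch reconstructed from the xtrace output; the failure branch is never taken in this run, so its exact message is assumed here.

settings_to_check='action_on_timeout timeout_us timeout_admin_us'
for setting in $settings_to_check; do
    # Pull "<name> <value>" out of each saved config dump and strip punctuation
    setting_before=$(grep "$setting" "$tmpfile_default_settings" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    setting_modified=$(grep "$setting" "$tmpfile_modified_settings" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [ "$setting_before" == "$setting_modified" ]; then
        echo "ERROR: $setting was not changed by bdev_nvme_set_options"   # assumed failure path
        exit 1
    fi
    echo "Setting $setting is changed as expected."
done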
00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65772 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65772 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:21.267 Setting timeout_admin_us is changed as expected. 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:21.267 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65772 /tmp/settings_modified_65772 00:09:21.268 09:42:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65804 00:09:21.268 09:42:00 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65804 ']' 00:09:21.268 09:42:00 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65804 00:09:21.268 09:42:00 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:21.268 09:42:00 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.268 09:42:00 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65804 00:09:21.525 killing process with pid 65804 00:09:21.525 09:42:00 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.525 09:42:00 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.525 09:42:00 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65804' 00:09:21.525 09:42:00 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65804 00:09:21.525 09:42:00 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65804 00:09:22.463 RPC TIMEOUT SETTING TEST PASSED. 00:09:22.463 09:42:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
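Teardown follows the pattern visible in the trace: clear the exit trap, delete the two settings dumps, and stop the target through killprocess. The helper below is a sketch reconstructed from the traced checks (uname, ps -o comm=, the sudo comparison); the real function in autotest_common.sh may carry additional handling.

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1              # '[' -z 65804 ']' guard in the trace
    kill -0 "$pid" || return 0             # nothing to do if the target already exited
    if [ "$(uname)" = Linux ]; then
        # reactor_0 in this run; a sudo wrapper would need to be signalled differently
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}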
00:09:22.463 ************************************ 00:09:22.463 END TEST nvme_rpc_timeouts 00:09:22.463 ************************************ 00:09:22.463 00:09:22.463 real 0m3.123s 00:09:22.463 user 0m6.065s 00:09:22.463 sys 0m0.476s 00:09:22.463 09:42:01 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.463 09:42:01 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:22.723 09:42:01 -- spdk/autotest.sh@239 -- # uname -s 00:09:22.723 09:42:01 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:22.723 09:42:01 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:22.723 09:42:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.724 09:42:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.724 09:42:01 -- common/autotest_common.sh@10 -- # set +x 00:09:22.724 ************************************ 00:09:22.724 START TEST sw_hotplug 00:09:22.724 ************************************ 00:09:22.724 09:42:01 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:22.724 * Looking for test storage... 00:09:22.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:22.724 09:42:01 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.724 09:42:01 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.724 09:42:01 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.724 09:42:01 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.724 09:42:01 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:22.724 09:42:01 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.724 09:42:01 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.724 --rc genhtml_branch_coverage=1 00:09:22.724 --rc genhtml_function_coverage=1 00:09:22.724 --rc genhtml_legend=1 00:09:22.724 --rc geninfo_all_blocks=1 00:09:22.724 --rc geninfo_unexecuted_blocks=1 00:09:22.724 00:09:22.724 ' 00:09:22.724 09:42:01 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.724 --rc genhtml_branch_coverage=1 00:09:22.724 --rc genhtml_function_coverage=1 00:09:22.724 --rc genhtml_legend=1 00:09:22.724 --rc geninfo_all_blocks=1 00:09:22.724 --rc geninfo_unexecuted_blocks=1 00:09:22.724 00:09:22.724 ' 00:09:22.724 09:42:01 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:22.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.724 --rc genhtml_branch_coverage=1 00:09:22.724 --rc genhtml_function_coverage=1 00:09:22.724 --rc genhtml_legend=1 00:09:22.724 --rc geninfo_all_blocks=1 00:09:22.724 --rc geninfo_unexecuted_blocks=1 00:09:22.724 00:09:22.724 ' 00:09:22.724 09:42:01 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.724 --rc genhtml_branch_coverage=1 00:09:22.724 --rc genhtml_function_coverage=1 00:09:22.724 --rc genhtml_legend=1 00:09:22.724 --rc geninfo_all_blocks=1 00:09:22.724 --rc geninfo_unexecuted_blocks=1 00:09:22.724 00:09:22.724 ' 00:09:22.724 09:42:01 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:22.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:23.248 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:23.248 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:23.248 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:23.248 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:23.248 09:42:01 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:23.248 09:42:01 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:23.248 09:42:01 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:23.248 09:42:01 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:23.248 09:42:01 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:23.248 09:42:01 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:23.248 09:42:01 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:23.248 09:42:01 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:23.248 09:42:01 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:23.556 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:23.828 Waiting for block devices as requested 00:09:23.828 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.828 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.829 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.829 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.129 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:29.129 09:42:07 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:29.129 09:42:07 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:29.390 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:29.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.390 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:29.651 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:29.912 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.912 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.912 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:29.912 09:42:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:30.172 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:30.172 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:30.172 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66661 00:09:30.172 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:30.172 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:30.172 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:30.172 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:30.172 09:42:08 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:30.172 09:42:08 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:30.172 09:42:08 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:30.172 09:42:08 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:30.172 09:42:08 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:30.172 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:30.172 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:30.172 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:30.172 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:30.172 09:42:08 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:30.172 Initializing NVMe Controllers 00:09:30.172 Attaching to 0000:00:10.0 00:09:30.172 Attaching to 0000:00:11.0 00:09:30.172 Attached to 0000:00:10.0 00:09:30.172 Attached to 0000:00:11.0 00:09:30.172 Initialization complete. Starting I/O... 
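Device discovery for this test (the iter_pci_class_code trace further up) filters PCI functions by class code 01/08/02, i.e. mass storage / non-volatile memory / NVM Express. In essence the pipeline is:

# List every PCI function whose class/subclass/prog-if is 01/08/02 (an NVMe controller)
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

Each resulting BDF is then vetted by pci_can_use against the (initially empty) allow list, and only the first nvme_count=2 entries, 0000:00:10.0 and 0000:00:11.0, are kept for the hotplug loop.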
00:09:30.172 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:30.172 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:30.172 00:09:31.559 QEMU NVMe Ctrl (12340 ): 2696 I/Os completed (+2696) 00:09:31.559 QEMU NVMe Ctrl (12341 ): 2697 I/Os completed (+2697) 00:09:31.559 00:09:32.503 QEMU NVMe Ctrl (12340 ): 6036 I/Os completed (+3340) 00:09:32.503 QEMU NVMe Ctrl (12341 ): 6037 I/Os completed (+3340) 00:09:32.503 00:09:33.447 QEMU NVMe Ctrl (12340 ): 9336 I/Os completed (+3300) 00:09:33.447 QEMU NVMe Ctrl (12341 ): 9337 I/Os completed (+3300) 00:09:33.447 00:09:34.385 QEMU NVMe Ctrl (12340 ): 12774 I/Os completed (+3438) 00:09:34.385 QEMU NVMe Ctrl (12341 ): 12783 I/Os completed (+3446) 00:09:34.385 00:09:35.319 QEMU NVMe Ctrl (12340 ): 16518 I/Os completed (+3744) 00:09:35.319 QEMU NVMe Ctrl (12341 ): 16533 I/Os completed (+3750) 00:09:35.319 00:09:36.253 09:42:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:36.253 09:42:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:36.253 09:42:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:36.253 [2024-11-28 09:42:14.860890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:36.253 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:36.253 [2024-11-28 09:42:14.861891] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 [2024-11-28 09:42:14.862006] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 [2024-11-28 09:42:14.862037] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 [2024-11-28 09:42:14.862097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:36.253 [2024-11-28 09:42:14.863686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 [2024-11-28 09:42:14.863780] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 [2024-11-28 09:42:14.863807] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 [2024-11-28 09:42:14.863858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:09:36.253 EAL: Scan for (pci) bus failed. 00:09:36.253 09:42:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:36.253 09:42:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:36.253 [2024-11-28 09:42:14.882648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:36.253 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:36.253 [2024-11-28 09:42:14.883571] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 [2024-11-28 09:42:14.883661] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 [2024-11-28 09:42:14.883695] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 [2024-11-28 09:42:14.883742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:36.253 [2024-11-28 09:42:14.885180] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 [2024-11-28 09:42:14.885229] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 [2024-11-28 09:42:14.885244] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 [2024-11-28 09:42:14.885257] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:36.253 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/class 00:09:36.253 EAL: Scan for (pci) bus failed. 00:09:36.253 09:42:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:36.253 09:42:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:36.253 09:42:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:36.253 09:42:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:36.253 09:42:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:36.253 09:42:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:36.253 00:09:36.253 09:42:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:36.253 09:42:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:36.253 09:42:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:36.253 09:42:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:36.253 Attaching to 0000:00:10.0 00:09:36.253 Attached to 0000:00:10.0 00:09:36.253 09:42:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:36.510 09:42:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:36.510 09:42:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:36.510 Attaching to 0000:00:11.0 00:09:36.510 Attached to 0000:00:11.0 00:09:37.446 QEMU NVMe Ctrl (12340 ): 3690 I/Os completed (+3690) 00:09:37.446 QEMU NVMe Ctrl (12341 ): 3418 I/Os completed (+3418) 00:09:37.446 00:09:38.390 QEMU NVMe Ctrl (12340 ): 6978 I/Os completed (+3288) 00:09:38.390 QEMU NVMe Ctrl (12341 ): 6706 I/Os completed (+3288) 00:09:38.390 00:09:39.331 QEMU NVMe Ctrl (12340 ): 10314 I/Os completed (+3336) 00:09:39.331 QEMU NVMe Ctrl (12341 ): 10048 I/Os completed (+3342) 00:09:39.331 00:09:40.264 QEMU NVMe Ctrl (12340 ): 14036 I/Os completed (+3722) 00:09:40.264 QEMU NVMe Ctrl (12341 ): 13779 I/Os completed (+3731) 00:09:40.264 00:09:41.199 QEMU NVMe Ctrl (12340 ): 17772 I/Os completed (+3736) 00:09:41.199 QEMU NVMe Ctrl (12341 ): 17522 I/Os completed (+3743) 00:09:41.199 00:09:42.570 QEMU NVMe Ctrl (12340 ): 21504 I/Os completed (+3732) 00:09:42.570 QEMU NVMe Ctrl (12341 ): 21284 I/Os completed (+3762) 00:09:42.570 00:09:43.510 QEMU NVMe Ctrl (12340 ): 24956 I/Os completed (+3452) 00:09:43.510 QEMU NVMe Ctrl (12341 ): 24696 I/Os completed (+3412) 
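Each hotplug event in this loop is driven purely through sysfs. xtrace does not print redirection targets, so the paths below are a plausible reconstruction of the echo sequence seen above (script lines 40, 56, 59-62) based on the standard Linux PCI sysfs interface, not a verbatim copy of sw_hotplug.sh:

for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"            # surprise-remove the controller
done
echo 1 > /sys/bus/pci/rescan                               # bring the functions back
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe               # rebind to the userspace driver
    echo "" > "/sys/bus/pci/devices/$dev/driver_override"  # clear the override again
done

The trace echoes each BDF twice (script lines 60-61), so the split between a drivers_probe write and a separate bind write is an assumption; the hotplug example then reports the devices as re-attached, which is the "Attaching to/Attached to" output interleaved with the I/O counters.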
00:09:43.510 00:09:44.451 QEMU NVMe Ctrl (12340 ): 27656 I/Os completed (+2700) 00:09:44.451 QEMU NVMe Ctrl (12341 ): 27400 I/Os completed (+2704) 00:09:44.451 00:09:45.412 QEMU NVMe Ctrl (12340 ): 30328 I/Os completed (+2672) 00:09:45.412 QEMU NVMe Ctrl (12341 ): 30084 I/Os completed (+2684) 00:09:45.412 00:09:46.348 QEMU NVMe Ctrl (12340 ): 33348 I/Os completed (+3020) 00:09:46.348 QEMU NVMe Ctrl (12341 ): 33104 I/Os completed (+3020) 00:09:46.348 00:09:47.281 QEMU NVMe Ctrl (12340 ): 37005 I/Os completed (+3657) 00:09:47.281 QEMU NVMe Ctrl (12341 ): 36772 I/Os completed (+3668) 00:09:47.281 00:09:48.220 QEMU NVMe Ctrl (12340 ): 40072 I/Os completed (+3067) 00:09:48.220 QEMU NVMe Ctrl (12341 ): 39926 I/Os completed (+3154) 00:09:48.220 00:09:48.483 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:48.483 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:48.483 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:48.483 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:48.483 [2024-11-28 09:42:27.140305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:48.483 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:48.483 [2024-11-28 09:42:27.143629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 [2024-11-28 09:42:27.143701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 [2024-11-28 09:42:27.143722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 [2024-11-28 09:42:27.143741] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:48.483 [2024-11-28 09:42:27.145951] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 [2024-11-28 09:42:27.146020] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 [2024-11-28 09:42:27.146036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 [2024-11-28 09:42:27.146051] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:48.483 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:48.483 [2024-11-28 09:42:27.162807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:48.483 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:48.483 [2024-11-28 09:42:27.164074] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 [2024-11-28 09:42:27.164253] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 [2024-11-28 09:42:27.164300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 [2024-11-28 09:42:27.164329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:48.483 [2024-11-28 09:42:27.166353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 [2024-11-28 09:42:27.166437] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 [2024-11-28 09:42:27.166458] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 [2024-11-28 09:42:27.166475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:48.483 EAL: Cannot open sysfs resource 00:09:48.483 EAL: pci_scan_one(): cannot parse resource 00:09:48.483 EAL: Scan for (pci) bus failed. 00:09:48.483 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:48.483 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:48.483 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:48.483 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:48.483 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:48.483 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:48.744 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:48.744 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:48.744 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:48.744 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:48.744 Attaching to 0000:00:10.0 00:09:48.744 Attached to 0000:00:10.0 00:09:48.744 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:48.744 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:48.744 09:42:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:48.744 Attaching to 0000:00:11.0 00:09:48.744 Attached to 0000:00:11.0 00:09:49.317 QEMU NVMe Ctrl (12340 ): 1728 I/Os completed (+1728) 00:09:49.317 QEMU NVMe Ctrl (12341 ): 1500 I/Os completed (+1500) 00:09:49.317 00:09:50.262 QEMU NVMe Ctrl (12340 ): 4465 I/Os completed (+2737) 00:09:50.262 QEMU NVMe Ctrl (12341 ): 4249 I/Os completed (+2749) 00:09:50.262 00:09:51.206 QEMU NVMe Ctrl (12340 ): 7137 I/Os completed (+2672) 00:09:51.206 QEMU NVMe Ctrl (12341 ): 6943 I/Os completed (+2694) 00:09:51.206 00:09:52.593 QEMU NVMe Ctrl (12340 ): 9849 I/Os completed (+2712) 00:09:52.593 QEMU NVMe Ctrl (12341 ): 9660 I/Os completed (+2717) 00:09:52.593 00:09:53.165 QEMU NVMe Ctrl (12340 ): 12549 I/Os completed (+2700) 00:09:53.165 QEMU NVMe Ctrl (12341 ): 12361 I/Os completed (+2701) 00:09:53.165 00:09:54.550 QEMU NVMe Ctrl (12340 ): 15273 I/Os completed (+2724) 00:09:54.550 QEMU NVMe Ctrl (12341 ): 15093 I/Os completed (+2732) 00:09:54.550 00:09:55.490 QEMU NVMe Ctrl (12340 ): 17965 I/Os completed (+2692) 00:09:55.490 QEMU NVMe Ctrl (12341 ): 17791 I/Os completed (+2698) 00:09:55.490 00:09:56.453 
QEMU NVMe Ctrl (12340 ): 20773 I/Os completed (+2808) 00:09:56.454 QEMU NVMe Ctrl (12341 ): 20611 I/Os completed (+2820) 00:09:56.454 00:09:57.390 QEMU NVMe Ctrl (12340 ): 24135 I/Os completed (+3362) 00:09:57.390 QEMU NVMe Ctrl (12341 ): 23989 I/Os completed (+3378) 00:09:57.390 00:09:58.326 QEMU NVMe Ctrl (12340 ): 27823 I/Os completed (+3688) 00:09:58.326 QEMU NVMe Ctrl (12341 ): 27679 I/Os completed (+3690) 00:09:58.326 00:09:59.261 QEMU NVMe Ctrl (12340 ): 31529 I/Os completed (+3706) 00:09:59.261 QEMU NVMe Ctrl (12341 ): 31385 I/Os completed (+3706) 00:09:59.261 00:10:00.201 QEMU NVMe Ctrl (12340 ): 34510 I/Os completed (+2981) 00:10:00.201 QEMU NVMe Ctrl (12341 ): 34405 I/Os completed (+3020) 00:10:00.201 00:10:00.775 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:00.775 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:00.775 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:00.775 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:00.775 [2024-11-28 09:42:39.464218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:00.775 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:00.775 [2024-11-28 09:42:39.465816] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 [2024-11-28 09:42:39.465936] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 [2024-11-28 09:42:39.465972] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 [2024-11-28 09:42:39.466004] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:00.775 [2024-11-28 09:42:39.468283] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 [2024-11-28 09:42:39.468470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 [2024-11-28 09:42:39.468511] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 [2024-11-28 09:42:39.468654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:00.775 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:00.775 [2024-11-28 09:42:39.490087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:00.775 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:00.775 [2024-11-28 09:42:39.491803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 [2024-11-28 09:42:39.491978] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 [2024-11-28 09:42:39.492021] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 [2024-11-28 09:42:39.492081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:00.775 [2024-11-28 09:42:39.494177] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 [2024-11-28 09:42:39.494344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 [2024-11-28 09:42:39.494389] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 [2024-11-28 09:42:39.494419] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.775 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:00.775 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:00.775 EAL: Scan for (pci) bus failed. 00:10:00.775 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:00.775 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:00.775 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:00.775 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:01.036 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:01.036 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:01.036 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:01.036 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:01.036 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:01.036 Attaching to 0000:00:10.0 00:10:01.036 Attached to 0000:00:10.0 00:10:01.036 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:01.036 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:01.036 09:42:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:01.036 Attaching to 0000:00:11.0 00:10:01.036 Attached to 0000:00:11.0 00:10:01.036 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:01.036 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:01.036 [2024-11-28 09:42:39.797477] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:13.280 09:42:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:13.280 09:42:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:13.280 09:42:51 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.93 00:10:13.280 09:42:51 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.93 00:10:13.280 09:42:51 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:13.280 09:42:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.93 00:10:13.280 09:42:51 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.93 2 00:10:13.280 remove_attach_helper took 42.93s to complete (handling 2 nvme drive(s)) 09:42:51 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:19.870 09:42:57 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66661 00:10:19.870 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66661) - No such process 00:10:19.870 09:42:57 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66661 00:10:19.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.870 09:42:57 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:19.870 09:42:57 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:19.870 09:42:57 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:19.870 09:42:57 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67209 00:10:19.870 09:42:57 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:19.870 09:42:57 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67209 00:10:19.870 09:42:57 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67209 ']' 00:10:19.870 09:42:57 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.870 09:42:57 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.870 09:42:57 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:19.870 09:42:57 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.870 09:42:57 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.870 09:42:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:19.870 [2024-11-28 09:42:57.888910] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
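From this point the same hotplug events are replayed against a running spdk_tgt instead of the standalone hotplug app: hotplug monitoring is switched on over RPC (bdev_nvme_set_hotplug -e in the trace below) and device presence is tracked through the bdev layer. A sketch of the bdev_bdfs helper and the wait loop, paraphrasing the trace that follows (rpc_cmd is the autotest wrapper around scripts/rpc.py):

bdev_bdfs() {
    # Every NVMe bdev reports the PCI address of the controller it sits on
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

# After removing the devices, poll until no bdev still references them
bdfs=($(bdev_bdfs))
while ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done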
00:10:19.870 [2024-11-28 09:42:57.889345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67209 ] 00:10:19.870 [2024-11-28 09:42:58.049634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.870 [2024-11-28 09:42:58.164660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.131 09:42:58 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.131 09:42:58 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:20.131 09:42:58 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:20.131 09:42:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.131 09:42:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:20.131 09:42:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.131 09:42:58 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:20.131 09:42:58 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:20.131 09:42:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:20.131 09:42:58 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:20.131 09:42:58 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:20.131 09:42:58 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:20.131 09:42:58 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:20.131 09:42:58 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:20.131 09:42:58 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:20.131 09:42:58 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:20.131 09:42:58 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:20.131 09:42:58 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:20.131 09:42:58 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:26.700 09:43:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:26.700 09:43:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:26.700 09:43:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:26.700 09:43:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:26.700 09:43:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:26.700 09:43:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:26.700 09:43:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:26.700 09:43:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:26.700 09:43:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:26.700 09:43:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:26.700 09:43:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.700 09:43:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:26.700 09:43:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:26.700 09:43:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.700 09:43:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:26.700 09:43:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:26.700 [2024-11-28 09:43:04.960710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:26.701 [2024-11-28 09:43:04.962065] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.701 [2024-11-28 09:43:04.962106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:26.701 [2024-11-28 09:43:04.962124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.701 [2024-11-28 09:43:04.962147] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.701 [2024-11-28 09:43:04.962169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:26.701 [2024-11-28 09:43:04.962184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.701 [2024-11-28 09:43:04.962197] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.701 [2024-11-28 09:43:04.962208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:26.701 [2024-11-28 09:43:04.962218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.701 [2024-11-28 09:43:04.962235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.701 [2024-11-28 09:43:04.962247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:26.701 [2024-11-28 09:43:04.962259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.701 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:26.701 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:26.701 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:26.701 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:26.701 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:26.701 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:26.701 09:43:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.701 09:43:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:26.701 09:43:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.701 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:26.701 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:26.958 [2024-11-28 09:43:05.660705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:26.959 [2024-11-28 09:43:05.661902] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.959 [2024-11-28 09:43:05.661936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:26.959 [2024-11-28 09:43:05.661952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.959 [2024-11-28 09:43:05.661971] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.959 [2024-11-28 09:43:05.661984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:26.959 [2024-11-28 09:43:05.661995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.959 [2024-11-28 09:43:05.662008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.959 [2024-11-28 09:43:05.662018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:26.959 [2024-11-28 09:43:05.662031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.959 [2024-11-28 09:43:05.662043] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.959 [2024-11-28 09:43:05.662056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:26.959 [2024-11-28 09:43:05.662067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:27.217 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:27.217 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:27.217 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:27.217 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:27.217 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:27.217 09:43:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:27.217 09:43:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.217 09:43:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:27.217 09:43:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.217 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:27.217 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:27.217 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:27.217 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:27.217 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:27.478 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:27.478 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:27.478 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:27.478 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:27.478 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:27.478 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:27.478 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:27.478 09:43:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:39.700 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:39.700 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:39.700 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:39.700 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:39.700 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:39.700 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:39.700 09:43:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.700 09:43:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:39.700 09:43:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.700 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:39.700 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:39.701 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:39.701 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:39.701 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:39.701 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:39.701 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:39.701 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:39.701 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:39.701 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:39.701 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:39.701 09:43:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.701 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:39.701 09:43:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:39.701 [2024-11-28 09:43:18.360914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:39.701 [2024-11-28 09:43:18.362099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.701 [2024-11-28 09:43:18.362135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.701 [2024-11-28 09:43:18.362147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.701 [2024-11-28 09:43:18.362173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.701 [2024-11-28 09:43:18.362181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.701 [2024-11-28 09:43:18.362189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.701 [2024-11-28 09:43:18.362196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.701 [2024-11-28 09:43:18.362204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.701 [2024-11-28 09:43:18.362211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.702 [2024-11-28 09:43:18.362219] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.702 [2024-11-28 09:43:18.362225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.702 [2024-11-28 09:43:18.362233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.702 09:43:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.702 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:39.702 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:40.270 [2024-11-28 09:43:18.860910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:40.270 [2024-11-28 09:43:18.862025] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.270 [2024-11-28 09:43:18.862056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.270 [2024-11-28 09:43:18.862068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.270 [2024-11-28 09:43:18.862081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.270 [2024-11-28 09:43:18.862090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.270 [2024-11-28 09:43:18.862097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.270 [2024-11-28 09:43:18.862105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.270 [2024-11-28 09:43:18.862112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.270 [2024-11-28 09:43:18.862119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.270 [2024-11-28 09:43:18.862127] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.270 [2024-11-28 09:43:18.862134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.270 [2024-11-28 09:43:18.862140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.270 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:40.270 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:40.270 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:40.270 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:40.270 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:40.270 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:40.270 09:43:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.270 09:43:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:40.270 09:43:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.270 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:40.270 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:40.270 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:40.270 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:40.270 09:43:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:40.270 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:40.270 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:40.270 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:40.270 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:40.270 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:40.529 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:40.529 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:40.529 09:43:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:52.735 09:43:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.735 09:43:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:52.735 09:43:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:52.735 09:43:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.735 09:43:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:52.735 [2024-11-28 09:43:31.261112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:52.735 [2024-11-28 09:43:31.262289] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.735 [2024-11-28 09:43:31.262321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.735 [2024-11-28 09:43:31.262331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.735 [2024-11-28 09:43:31.262348] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.735 [2024-11-28 09:43:31.262355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.735 [2024-11-28 09:43:31.262366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.735 [2024-11-28 09:43:31.262373] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.735 [2024-11-28 09:43:31.262381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.735 [2024-11-28 09:43:31.262388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.735 [2024-11-28 09:43:31.262397] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.735 [2024-11-28 09:43:31.262403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.735 [2024-11-28 09:43:31.262411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.735 09:43:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:52.735 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:52.994 [2024-11-28 09:43:31.661110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:52.994 [2024-11-28 09:43:31.662245] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.994 [2024-11-28 09:43:31.662274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.994 [2024-11-28 09:43:31.662285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.994 [2024-11-28 09:43:31.662297] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.994 [2024-11-28 09:43:31.662306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.994 [2024-11-28 09:43:31.662312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.994 [2024-11-28 09:43:31.662321] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.994 [2024-11-28 09:43:31.662327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.994 [2024-11-28 09:43:31.662337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.994 [2024-11-28 09:43:31.662344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.994 [2024-11-28 09:43:31.662352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:52.994 [2024-11-28 09:43:31.662358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.994 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:52.994 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:52.994 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:52.994 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:52.994 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:52.994 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:52.994 09:43:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.994 09:43:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:52.994 09:43:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.994 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:52.994 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:53.253 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:53.253 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:53.253 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:53.253 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:53.253 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:53.253 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:53.253 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:53.253 09:43:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:53.253 09:43:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:53.253 09:43:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:53.253 09:43:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.21 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.21 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2 00:11:05.489 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:05.489 09:43:44 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:05.489 09:43:44 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:05.489 09:43:44 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:12.051 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:12.051 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:12.051 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.052 09:43:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.052 09:43:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.052 09:43:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:12.052 [2024-11-28 09:43:50.206422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:12.052 [2024-11-28 09:43:50.207303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.052 [2024-11-28 09:43:50.207336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.052 [2024-11-28 09:43:50.207347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.052 [2024-11-28 09:43:50.207364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.052 [2024-11-28 09:43:50.207372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.052 [2024-11-28 09:43:50.207380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.052 [2024-11-28 09:43:50.207387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.052 [2024-11-28 09:43:50.207394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.052 [2024-11-28 09:43:50.207401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.052 [2024-11-28 09:43:50.207409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.052 [2024-11-28 09:43:50.207415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.052 [2024-11-28 09:43:50.207426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.052 [2024-11-28 09:43:50.606419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:12.052 [2024-11-28 09:43:50.607267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.052 [2024-11-28 09:43:50.607294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.052 [2024-11-28 09:43:50.607305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.052 [2024-11-28 09:43:50.607316] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.052 [2024-11-28 09:43:50.607327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.052 [2024-11-28 09:43:50.607334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.052 [2024-11-28 09:43:50.607343] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.052 [2024-11-28 09:43:50.607349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.052 [2024-11-28 09:43:50.607357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.052 [2024-11-28 09:43:50.607364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.052 [2024-11-28 09:43:50.607372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.052 [2024-11-28 09:43:50.607378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.052 09:43:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.052 09:43:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.052 09:43:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:12.052 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:12.312 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:12.312 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.312 09:43:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:24.513 09:44:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:24.513 09:44:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:24.513 09:44:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:24.513 09:44:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.513 09:44:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.513 09:44:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.513 09:44:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.513 09:44:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.513 09:44:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.513 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:24.513 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:24.513 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.513 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.513 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.513 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.513 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:24.513 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.513 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.513 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.513 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.513 09:44:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.513 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.513 09:44:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.513 09:44:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.513 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:24.513 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:24.513 [2024-11-28 09:44:03.106632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:24.513 [2024-11-28 09:44:03.109229] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.513 [2024-11-28 09:44:03.109269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.513 [2024-11-28 09:44:03.109280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.513 [2024-11-28 09:44:03.109296] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.513 [2024-11-28 09:44:03.109303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.513 [2024-11-28 09:44:03.109312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.513 [2024-11-28 09:44:03.109319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.513 [2024-11-28 09:44:03.109327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.513 [2024-11-28 09:44:03.109333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.513 [2024-11-28 09:44:03.109341] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.513 [2024-11-28 09:44:03.109348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.514 [2024-11-28 09:44:03.109355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.772 [2024-11-28 09:44:03.506635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:24.772 [2024-11-28 09:44:03.507476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.772 [2024-11-28 09:44:03.507504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.772 [2024-11-28 09:44:03.507514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.772 [2024-11-28 09:44:03.507525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.772 [2024-11-28 09:44:03.507536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.772 [2024-11-28 09:44:03.507543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.772 [2024-11-28 09:44:03.507552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.772 [2024-11-28 09:44:03.507558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.772 [2024-11-28 09:44:03.507566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.772 [2024-11-28 09:44:03.507573] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.772 [2024-11-28 09:44:03.507582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.772 [2024-11-28 09:44:03.507589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.772 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:24.772 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.772 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.772 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.772 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.772 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.772 09:44:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.772 09:44:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.772 09:44:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.772 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:24.772 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:25.031 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:25.031 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:25.031 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:25.031 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:25.031 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:25.031 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:25.031 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:25.031 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:25.031 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:25.031 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:25.031 09:44:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:37.230 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:37.230 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:37.230 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:37.230 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.230 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.230 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.230 09:44:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.230 09:44:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.230 09:44:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.230 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:37.230 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:37.230 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:37.230 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:37.230 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:37.230 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:37.230 [2024-11-28 09:44:15.906828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:37.230 [2024-11-28 09:44:15.908045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.230 [2024-11-28 09:44:15.908077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.230 [2024-11-28 09:44:15.908087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.230 [2024-11-28 09:44:15.908103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.230 [2024-11-28 09:44:15.908109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.230 [2024-11-28 09:44:15.908117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.230 [2024-11-28 09:44:15.908125] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.230 [2024-11-28 09:44:15.908136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.230 [2024-11-28 09:44:15.908142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.230 [2024-11-28 09:44:15.908150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.231 [2024-11-28 09:44:15.908167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.231 [2024-11-28 09:44:15.908174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.231 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:37.231 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:37.231 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:37.231 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.231 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.231 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.231 09:44:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.231 09:44:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.231 09:44:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.231 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:37.231 09:44:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:37.489 [2024-11-28 09:44:16.306828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:37.489 [2024-11-28 09:44:16.307690] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.489 [2024-11-28 09:44:16.307720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.489 [2024-11-28 09:44:16.307731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.489 [2024-11-28 09:44:16.307744] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.489 [2024-11-28 09:44:16.307752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.489 [2024-11-28 09:44:16.307758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.489 [2024-11-28 09:44:16.307767] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.489 [2024-11-28 09:44:16.307774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.489 [2024-11-28 09:44:16.307781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.489 [2024-11-28 09:44:16.307788] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.489 [2024-11-28 09:44:16.307799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.489 [2024-11-28 09:44:16.307805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.748 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:37.748 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:37.748 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:37.748 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.748 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.748 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:37.748 09:44:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.748 09:44:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.748 09:44:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.748 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:37.748 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:37.748 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.748 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.748 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:38.006 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:38.006 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:38.006 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:38.006 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:38.006 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:38.006 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:38.006 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:38.006 09:44:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:50.210 09:44:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:50.210 09:44:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:50.210 09:44:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:50.210 09:44:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.210 09:44:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.210 09:44:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.210 09:44:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.210 09:44:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.210 09:44:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.210 09:44:28 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:50.210 09:44:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:50.210 09:44:28 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.65 00:11:50.210 09:44:28 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.65 00:11:50.210 09:44:28 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:50.210 09:44:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.65 00:11:50.210 09:44:28 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.65 2 00:11:50.210 remove_attach_helper took 44.65s to complete (handling 2 nvme drive(s)) 09:44:28 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:50.210 09:44:28 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67209 00:11:50.210 09:44:28 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67209 ']' 00:11:50.210 09:44:28 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67209 00:11:50.210 09:44:28 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:50.210 09:44:28 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.210 09:44:28 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67209 00:11:50.210 09:44:28 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.210 09:44:28 
sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.210 killing process with pid 67209 00:11:50.210 09:44:28 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67209' 00:11:50.210 09:44:28 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67209 00:11:50.210 09:44:28 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67209 00:11:51.148 09:44:29 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:51.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:51.979 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:51.979 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:51.979 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:51.979 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:51.979 00:11:51.979 real 2m29.434s 00:11:51.979 user 1m50.939s 00:11:51.979 sys 0m17.031s 00:11:51.979 09:44:30 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.979 09:44:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.979 ************************************ 00:11:51.979 END TEST sw_hotplug 00:11:51.979 ************************************ 00:11:51.979 09:44:30 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:51.979 09:44:30 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:51.979 09:44:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:51.979 09:44:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.979 09:44:30 -- common/autotest_common.sh@10 -- # set +x 00:11:52.243 ************************************ 00:11:52.243 START TEST nvme_xnvme 00:11:52.243 ************************************ 00:11:52.243 09:44:30 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:52.243 * Looking for test storage... 
00:11:52.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:52.243 09:44:30 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:52.243 09:44:30 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:52.243 09:44:30 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:52.243 09:44:31 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.243 09:44:31 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:52.243 09:44:31 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.243 09:44:31 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.243 --rc genhtml_branch_coverage=1 00:11:52.243 --rc genhtml_function_coverage=1 00:11:52.243 --rc genhtml_legend=1 00:11:52.243 --rc geninfo_all_blocks=1 00:11:52.243 --rc geninfo_unexecuted_blocks=1 00:11:52.243 00:11:52.243 ' 00:11:52.243 09:44:31 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.243 --rc genhtml_branch_coverage=1 00:11:52.243 --rc genhtml_function_coverage=1 00:11:52.243 --rc genhtml_legend=1 00:11:52.243 --rc geninfo_all_blocks=1 00:11:52.243 --rc geninfo_unexecuted_blocks=1 00:11:52.243 00:11:52.243 ' 00:11:52.243 09:44:31 
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.243 --rc genhtml_branch_coverage=1 00:11:52.243 --rc genhtml_function_coverage=1 00:11:52.243 --rc genhtml_legend=1 00:11:52.243 --rc geninfo_all_blocks=1 00:11:52.243 --rc geninfo_unexecuted_blocks=1 00:11:52.243 00:11:52.243 ' 00:11:52.243 09:44:31 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.243 --rc genhtml_branch_coverage=1 00:11:52.243 --rc genhtml_function_coverage=1 00:11:52.243 --rc genhtml_legend=1 00:11:52.243 --rc geninfo_all_blocks=1 00:11:52.243 --rc geninfo_unexecuted_blocks=1 00:11:52.243 00:11:52.243 ' 00:11:52.243 09:44:31 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:11:52.243 09:44:31 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:52.243 09:44:31 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:52.243 09:44:31 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:11:52.243 09:44:31 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:52.243 09:44:31 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:52.243 09:44:31 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:52.243 09:44:31 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:52.243 09:44:31 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:52.243 09:44:31 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:52.243 09:44:31 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:52.244 09:44:31 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:52.244 09:44:31 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:52.244 #define SPDK_CONFIG_H 00:11:52.244 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:52.244 #define SPDK_CONFIG_APPS 1 00:11:52.244 #define SPDK_CONFIG_ARCH native 00:11:52.244 #define SPDK_CONFIG_ASAN 1 00:11:52.244 #undef SPDK_CONFIG_AVAHI 00:11:52.244 #undef SPDK_CONFIG_CET 00:11:52.244 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:52.244 #define SPDK_CONFIG_COVERAGE 1 00:11:52.244 #define SPDK_CONFIG_CROSS_PREFIX 00:11:52.244 #undef SPDK_CONFIG_CRYPTO 00:11:52.244 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:52.244 #undef SPDK_CONFIG_CUSTOMOCF 00:11:52.244 #undef SPDK_CONFIG_DAOS 00:11:52.244 #define SPDK_CONFIG_DAOS_DIR 00:11:52.244 #define SPDK_CONFIG_DEBUG 1 00:11:52.244 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:52.244 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:11:52.244 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:52.244 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:52.244 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:52.244 #undef SPDK_CONFIG_DPDK_UADK 00:11:52.244 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:52.244 #define SPDK_CONFIG_EXAMPLES 1 00:11:52.244 #undef SPDK_CONFIG_FC 00:11:52.244 #define SPDK_CONFIG_FC_PATH 00:11:52.244 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:52.244 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:52.244 #define SPDK_CONFIG_FSDEV 1 00:11:52.244 #undef SPDK_CONFIG_FUSE 00:11:52.244 #undef SPDK_CONFIG_FUZZER 00:11:52.244 #define SPDK_CONFIG_FUZZER_LIB 00:11:52.244 #undef SPDK_CONFIG_GOLANG 00:11:52.244 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:52.244 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:52.244 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:52.244 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:52.244 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:52.244 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:52.244 #undef SPDK_CONFIG_HAVE_LZ4 00:11:52.244 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:52.244 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:52.244 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:52.244 #define SPDK_CONFIG_IDXD 1 00:11:52.244 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:52.244 #undef SPDK_CONFIG_IPSEC_MB 00:11:52.244 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:52.244 #define SPDK_CONFIG_ISAL 1 00:11:52.244 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:52.244 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:52.244 #define SPDK_CONFIG_LIBDIR 00:11:52.244 #undef SPDK_CONFIG_LTO 00:11:52.244 #define SPDK_CONFIG_MAX_LCORES 128 00:11:52.244 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:52.244 #define SPDK_CONFIG_NVME_CUSE 1 00:11:52.244 #undef SPDK_CONFIG_OCF 00:11:52.244 #define SPDK_CONFIG_OCF_PATH 00:11:52.244 #define SPDK_CONFIG_OPENSSL_PATH 00:11:52.244 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:52.244 
#define SPDK_CONFIG_PGO_DIR 00:11:52.244 #undef SPDK_CONFIG_PGO_USE 00:11:52.244 #define SPDK_CONFIG_PREFIX /usr/local 00:11:52.244 #undef SPDK_CONFIG_RAID5F 00:11:52.244 #undef SPDK_CONFIG_RBD 00:11:52.244 #define SPDK_CONFIG_RDMA 1 00:11:52.244 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:52.244 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:52.244 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:52.244 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:52.244 #define SPDK_CONFIG_SHARED 1 00:11:52.244 #undef SPDK_CONFIG_SMA 00:11:52.244 #define SPDK_CONFIG_TESTS 1 00:11:52.244 #undef SPDK_CONFIG_TSAN 00:11:52.244 #define SPDK_CONFIG_UBLK 1 00:11:52.244 #define SPDK_CONFIG_UBSAN 1 00:11:52.244 #undef SPDK_CONFIG_UNIT_TESTS 00:11:52.244 #undef SPDK_CONFIG_URING 00:11:52.244 #define SPDK_CONFIG_URING_PATH 00:11:52.244 #undef SPDK_CONFIG_URING_ZNS 00:11:52.244 #undef SPDK_CONFIG_USDT 00:11:52.244 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:52.244 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:52.244 #undef SPDK_CONFIG_VFIO_USER 00:11:52.244 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:52.244 #define SPDK_CONFIG_VHOST 1 00:11:52.244 #define SPDK_CONFIG_VIRTIO 1 00:11:52.244 #undef SPDK_CONFIG_VTUNE 00:11:52.244 #define SPDK_CONFIG_VTUNE_DIR 00:11:52.244 #define SPDK_CONFIG_WERROR 1 00:11:52.244 #define SPDK_CONFIG_WPDK_DIR 00:11:52.244 #define SPDK_CONFIG_XNVME 1 00:11:52.244 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:52.244 09:44:31 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:52.244 09:44:31 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:52.244 09:44:31 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.244 09:44:31 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.244 09:44:31 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.244 09:44:31 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.245 09:44:31 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.245 09:44:31 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.245 09:44:31 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.245 09:44:31 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:52.245 09:44:31 nvme_xnvme -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@68 -- # uname -s 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:52.245 09:44:31 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:11:52.245 09:44:31 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@142 -- 
# : true 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:52.245 09:44:31 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:52.246 
09:44:31 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:52.246 
09:44:31 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 
00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68558 ]] 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68558 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.AGUPJY 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.AGUPJY/tests/xnvme /tmp/spdk.AGUPJY 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13942599680 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5625712640 00:11:52.246 09:44:31 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:52.247 
09:44:31 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13942599680 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5625712640 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265249792 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:11:52.247 09:44:31 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98813976576 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=888803328 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:52.247 * Looking for test storage... 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13942599680 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:52.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:11:52.247 09:44:31 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:52.510 09:44:31 nvme_xnvme -- 
common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:52.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.510 --rc genhtml_branch_coverage=1 00:11:52.510 --rc genhtml_function_coverage=1 00:11:52.510 --rc genhtml_legend=1 00:11:52.510 --rc geninfo_all_blocks=1 00:11:52.510 --rc geninfo_unexecuted_blocks=1 00:11:52.510 00:11:52.510 ' 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:52.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.510 --rc genhtml_branch_coverage=1 00:11:52.510 --rc genhtml_function_coverage=1 00:11:52.510 --rc genhtml_legend=1 00:11:52.510 --rc geninfo_all_blocks=1 00:11:52.510 --rc geninfo_unexecuted_blocks=1 00:11:52.510 00:11:52.510 ' 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:52.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.510 --rc genhtml_branch_coverage=1 00:11:52.510 --rc genhtml_function_coverage=1 00:11:52.510 --rc genhtml_legend=1 00:11:52.510 --rc geninfo_all_blocks=1 00:11:52.510 --rc geninfo_unexecuted_blocks=1 00:11:52.510 00:11:52.510 ' 00:11:52.510 09:44:31 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:52.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.510 --rc genhtml_branch_coverage=1 00:11:52.510 --rc genhtml_function_coverage=1 00:11:52.510 --rc genhtml_legend=1 00:11:52.510 --rc geninfo_all_blocks=1 00:11:52.510 --rc geninfo_unexecuted_blocks=1 00:11:52.510 00:11:52.510 ' 00:11:52.510 09:44:31 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.510 09:44:31 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.510 09:44:31 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.510 09:44:31 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.510 09:44:31 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.510 09:44:31 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:52.510 09:44:31 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:11:52.510 
09:44:31 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:11:52.510 09:44:31 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:11:52.511 09:44:31 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:11:52.511 09:44:31 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:11:52.511 09:44:31 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:52.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:53.034 Waiting for block devices as requested 00:11:53.034 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:53.034 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:53.034 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:53.294 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.594 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:58.594 09:44:37 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:11:58.594 09:44:37 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:11:58.594 09:44:37 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:11:58.854 09:44:37 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:11:58.854 09:44:37 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:58.854 No valid GPT data, bailing 00:11:58.854 09:44:37 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:58.854 09:44:37 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:11:58.854 09:44:37 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:11:58.854 09:44:37 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:11:58.854 09:44:37 
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:58.854 09:44:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.854 09:44:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:58.854 ************************************ 00:11:58.854 START TEST xnvme_rpc 00:11:58.854 ************************************ 00:11:58.854 09:44:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:11:58.854 09:44:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:11:58.854 09:44:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:11:58.854 09:44:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:11:58.854 09:44:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:11:58.854 09:44:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=68947 00:11:58.854 09:44:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 68947 00:11:58.854 09:44:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68947 ']' 00:11:58.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.854 09:44:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.854 09:44:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.855 09:44:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:58.855 09:44:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.855 09:44:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.855 09:44:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.116 [2024-11-28 09:44:37.767008] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
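The xnvme_rpc test that starts here only drives spdk_tgt over JSON-RPC, so the same sequence can be reproduced by hand with scripts/rpc.py once the target is listening on /var/tmp/spdk.sock. The sketch below mirrors the rpc_cmd calls visible in this trace (create an xnvme bdev on /dev/nvme0n1 with the libaio io_mechanism, read its parameters back, delete it); the wait loop and cleanup are illustrative additions, not part of the test script.

    #!/usr/bin/env bash
    # Illustrative sketch of the RPC sequence exercised by xnvme_rpc.
    set -euo pipefail
    SPDK=/home/vagrant/spdk_repo/spdk

    $SPDK/build/bin/spdk_tgt &                       # start the target
    tgt_pid=$!
    # Wait until the RPC socket answers (the test uses waitforlisten for this).
    until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # Positional arguments as used by rpc_cmd below: filename, name, io_mechanism.
    $SPDK/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio

    # Read back the recorded parameters (name, filename, io_mechanism, conserve_cpu).
    $SPDK/scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params'

    $SPDK/scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill "$tgt_pid"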
00:11:59.116 [2024-11-28 09:44:37.767171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68947 ] 00:11:59.116 [2024-11-28 09:44:37.928693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.379 [2024-11-28 09:44:38.054027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.953 xnvme_bdev 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.953 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 68947 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68947 ']' 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68947 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68947 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.214 killing process with pid 68947 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68947' 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 68947 00:12:00.214 09:44:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 68947 00:12:02.176 00:12:02.176 real 0m2.854s 00:12:02.176 user 0m2.848s 00:12:02.176 sys 0m0.458s 00:12:02.176 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.176 ************************************ 00:12:02.176 END TEST xnvme_rpc 00:12:02.176 ************************************ 00:12:02.176 09:44:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.176 09:44:40 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:02.176 09:44:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:02.176 09:44:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.176 09:44:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:02.176 ************************************ 00:12:02.176 START TEST xnvme_bdevperf 00:12:02.176 ************************************ 00:12:02.176 09:44:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:02.176 09:44:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:02.176 09:44:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:02.176 09:44:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:02.176 09:44:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:02.176 09:44:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:12:02.176 09:44:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:02.176 09:44:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:02.176 { 00:12:02.176 "subsystems": [ 00:12:02.176 { 00:12:02.176 "subsystem": "bdev", 00:12:02.176 "config": [ 00:12:02.176 { 00:12:02.176 "params": { 00:12:02.176 "io_mechanism": "libaio", 00:12:02.176 "conserve_cpu": false, 00:12:02.176 "filename": "/dev/nvme0n1", 00:12:02.176 "name": "xnvme_bdev" 00:12:02.176 }, 00:12:02.176 "method": "bdev_xnvme_create" 00:12:02.176 }, 00:12:02.176 { 00:12:02.176 "method": "bdev_wait_for_examine" 00:12:02.176 } 00:12:02.176 ] 00:12:02.176 } 00:12:02.176 ] 00:12:02.176 } 00:12:02.176 [2024-11-28 09:44:40.678585] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:02.176 [2024-11-28 09:44:40.678729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69021 ] 00:12:02.176 [2024-11-28 09:44:40.843150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.176 [2024-11-28 09:44:40.943416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.437 Running I/O for 5 seconds... 00:12:04.768 29643.00 IOPS, 115.79 MiB/s [2024-11-28T09:44:44.593Z] 28855.00 IOPS, 112.71 MiB/s [2024-11-28T09:44:45.539Z] 27614.33 IOPS, 107.87 MiB/s [2024-11-28T09:44:46.483Z] 27338.25 IOPS, 106.79 MiB/s 00:12:07.603 Latency(us) 00:12:07.603 [2024-11-28T09:44:46.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.603 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:07.603 xnvme_bdev : 5.00 27479.97 107.34 0.00 0.00 2323.65 374.94 10586.58 00:12:07.603 [2024-11-28T09:44:46.483Z] =================================================================================================================== 00:12:07.603 [2024-11-28T09:44:46.483Z] Total : 27479.97 107.34 0.00 0.00 2323.65 374.94 10586.58 00:12:08.176 09:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:08.177 09:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:08.177 09:44:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:08.177 09:44:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:08.177 09:44:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:08.437 { 00:12:08.437 "subsystems": [ 00:12:08.437 { 00:12:08.437 "subsystem": "bdev", 00:12:08.437 "config": [ 00:12:08.437 { 00:12:08.437 "params": { 00:12:08.437 "io_mechanism": "libaio", 00:12:08.437 "conserve_cpu": false, 00:12:08.437 "filename": "/dev/nvme0n1", 00:12:08.437 "name": "xnvme_bdev" 00:12:08.437 }, 00:12:08.437 "method": "bdev_xnvme_create" 00:12:08.438 }, 00:12:08.438 { 00:12:08.438 "method": "bdev_wait_for_examine" 00:12:08.438 } 00:12:08.438 ] 00:12:08.438 } 00:12:08.438 ] 00:12:08.438 } 00:12:08.438 [2024-11-28 09:44:47.101092] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
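The bdevperf runs above read their bdev configuration as JSON on /dev/fd/62. A minimal stand-alone sketch of the same randread run, assuming the repo layout and device shown in this log and writing the configuration to an ordinary file instead, would look like this:

# Sketch only; not part of the logged run. Paths, device name, and parameters are
# copied from the log above, and the JSON matches the config printed by gen_conf.
cat > /tmp/xnvme_libaio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/xnvme_libaio.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096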
00:12:08.438 [2024-11-28 09:44:47.101256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69102 ] 00:12:08.438 [2024-11-28 09:44:47.266266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.700 [2024-11-28 09:44:47.383888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.996 Running I/O for 5 seconds... 00:12:10.904 31186.00 IOPS, 121.82 MiB/s [2024-11-28T09:44:50.729Z] 18124.00 IOPS, 70.80 MiB/s [2024-11-28T09:44:52.118Z] 13437.00 IOPS, 52.49 MiB/s [2024-11-28T09:44:52.691Z] 11055.50 IOPS, 43.19 MiB/s [2024-11-28T09:44:52.952Z] 9637.00 IOPS, 37.64 MiB/s 00:12:14.072 Latency(us) 00:12:14.072 [2024-11-28T09:44:52.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.072 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:14.072 xnvme_bdev : 5.02 9614.26 37.56 0.00 0.00 6639.09 60.26 34280.37 00:12:14.072 [2024-11-28T09:44:52.952Z] =================================================================================================================== 00:12:14.072 [2024-11-28T09:44:52.952Z] Total : 9614.26 37.56 0.00 0.00 6639.09 60.26 34280.37 00:12:14.645 00:12:14.645 real 0m12.899s 00:12:14.645 user 0m7.442s 00:12:14.645 sys 0m4.330s 00:12:14.645 09:44:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.645 ************************************ 00:12:14.645 END TEST xnvme_bdevperf 00:12:14.645 ************************************ 00:12:14.645 09:44:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:14.907 09:44:53 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:14.907 09:44:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:14.907 09:44:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.907 09:44:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:14.907 ************************************ 00:12:14.907 START TEST xnvme_fio_plugin 00:12:14.907 ************************************ 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:14.907 09:44:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:14.907 { 00:12:14.907 "subsystems": [ 00:12:14.907 { 00:12:14.907 "subsystem": "bdev", 00:12:14.907 "config": [ 00:12:14.907 { 00:12:14.907 "params": { 00:12:14.907 "io_mechanism": "libaio", 00:12:14.907 "conserve_cpu": false, 00:12:14.907 "filename": "/dev/nvme0n1", 00:12:14.907 "name": "xnvme_bdev" 00:12:14.907 }, 00:12:14.907 "method": "bdev_xnvme_create" 00:12:14.907 }, 00:12:14.907 { 00:12:14.907 "method": "bdev_wait_for_examine" 00:12:14.907 } 00:12:14.907 ] 00:12:14.907 } 00:12:14.907 ] 00:12:14.907 } 00:12:14.907 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:14.907 fio-3.35 00:12:14.907 Starting 1 thread 00:12:21.504 00:12:21.504 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69216: Thu Nov 28 09:44:59 2024 00:12:21.504 read: IOPS=35.5k, BW=139MiB/s (145MB/s)(693MiB/5001msec) 00:12:21.504 slat (usec): min=4, max=2180, avg=20.46, stdev=83.07 00:12:21.504 clat (usec): min=54, max=10082, avg=1281.63, stdev=613.27 00:12:21.504 lat (usec): min=145, max=10148, avg=1302.09, stdev=610.16 00:12:21.504 clat percentiles (usec): 00:12:21.504 | 1.00th=[ 265], 5.00th=[ 449], 10.00th=[ 603], 20.00th=[ 791], 00:12:21.504 | 30.00th=[ 938], 40.00th=[ 1074], 50.00th=[ 1205], 60.00th=[ 1352], 00:12:21.504 | 70.00th=[ 1500], 80.00th=[ 1696], 90.00th=[ 2024], 95.00th=[ 2376], 00:12:21.504 | 99.00th=[ 3195], 99.50th=[ 3589], 99.90th=[ 4424], 99.95th=[ 5014], 00:12:21.504 | 99.99th=[ 8356] 00:12:21.504 bw ( KiB/s): min=125840, max=152016, 
per=98.90%, avg=140432.00, stdev=8662.27, samples=9 00:12:21.504 iops : min=31460, max=38004, avg=35108.00, stdev=2165.57, samples=9 00:12:21.504 lat (usec) : 100=0.01%, 250=0.80%, 500=5.59%, 750=11.27%, 1000=17.06% 00:12:21.504 lat (msec) : 2=54.86%, 4=10.18%, 10=0.23%, 20=0.01% 00:12:21.504 cpu : usr=39.62%, sys=49.06%, ctx=28, majf=0, minf=764 00:12:21.504 IO depths : 1=0.2%, 2=0.7%, 4=2.1%, 8=7.1%, 16=23.1%, 32=64.5%, >=64=2.3% 00:12:21.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.504 complete : 0=0.0%, 4=97.8%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:21.504 issued rwts: total=177524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.504 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:21.504 00:12:21.504 Run status group 0 (all jobs): 00:12:21.504 READ: bw=139MiB/s (145MB/s), 139MiB/s-139MiB/s (145MB/s-145MB/s), io=693MiB (727MB), run=5001-5001msec 00:12:21.766 ----------------------------------------------------- 00:12:21.766 Suppressions used: 00:12:21.766 count bytes template 00:12:21.766 1 11 /usr/src/fio/parse.c 00:12:21.766 1 8 libtcmalloc_minimal.so 00:12:21.766 1 904 libcrypto.so 00:12:21.766 ----------------------------------------------------- 00:12:21.766 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:21.766 
09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:21.766 09:45:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:21.766 { 00:12:21.766 "subsystems": [ 00:12:21.766 { 00:12:21.766 "subsystem": "bdev", 00:12:21.766 "config": [ 00:12:21.766 { 00:12:21.766 "params": { 00:12:21.766 "io_mechanism": "libaio", 00:12:21.766 "conserve_cpu": false, 00:12:21.766 "filename": "/dev/nvme0n1", 00:12:21.766 "name": "xnvme_bdev" 00:12:21.766 }, 00:12:21.766 "method": "bdev_xnvme_create" 00:12:21.766 }, 00:12:21.766 { 00:12:21.766 "method": "bdev_wait_for_examine" 00:12:21.766 } 00:12:21.766 ] 00:12:21.766 } 00:12:21.766 ] 00:12:21.766 } 00:12:22.029 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:22.029 fio-3.35 00:12:22.029 Starting 1 thread 00:12:28.619 00:12:28.619 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69307: Thu Nov 28 09:45:06 2024 00:12:28.619 write: IOPS=38.1k, BW=149MiB/s (156MB/s)(743MiB/5001msec); 0 zone resets 00:12:28.619 slat (usec): min=4, max=1784, avg=20.39, stdev=66.21 00:12:28.619 clat (usec): min=106, max=4874, avg=1123.43, stdev=545.75 00:12:28.619 lat (usec): min=188, max=4879, avg=1143.82, stdev=543.69 00:12:28.619 clat percentiles (usec): 00:12:28.619 | 1.00th=[ 251], 5.00th=[ 383], 10.00th=[ 502], 20.00th=[ 685], 00:12:28.619 | 30.00th=[ 824], 40.00th=[ 938], 50.00th=[ 1057], 60.00th=[ 1172], 00:12:28.619 | 70.00th=[ 1303], 80.00th=[ 1483], 90.00th=[ 1795], 95.00th=[ 2114], 00:12:28.619 | 99.00th=[ 2966], 99.50th=[ 3261], 99.90th=[ 3916], 99.95th=[ 4047], 00:12:28.619 | 99.99th=[ 4555] 00:12:28.619 bw ( KiB/s): min=148640, max=163536, per=100.00%, avg=155975.22, stdev=5358.35, samples=9 00:12:28.619 iops : min=37160, max=40884, avg=38993.78, stdev=1339.59, samples=9 00:12:28.619 lat (usec) : 250=0.98%, 500=8.89%, 750=14.47%, 1000=20.93% 00:12:28.619 lat (msec) : 2=48.35%, 4=6.31%, 10=0.07% 00:12:28.619 cpu : usr=35.70%, sys=49.54%, ctx=95, majf=0, minf=765 00:12:28.619 IO depths : 1=0.3%, 2=0.9%, 4=3.0%, 8=9.4%, 16=24.7%, 32=59.6%, >=64=2.0% 00:12:28.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.619 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:28.619 issued rwts: total=0,190292,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.619 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:28.619 00:12:28.619 Run status group 0 (all jobs): 00:12:28.619 WRITE: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=743MiB (779MB), run=5001-5001msec 00:12:28.619 ----------------------------------------------------- 00:12:28.619 Suppressions used: 00:12:28.619 count bytes template 00:12:28.619 1 11 /usr/src/fio/parse.c 00:12:28.619 1 8 libtcmalloc_minimal.so 00:12:28.619 1 904 libcrypto.so 00:12:28.619 ----------------------------------------------------- 00:12:28.619 00:12:28.619 00:12:28.619 real 0m13.809s 00:12:28.619 user 0m6.571s 00:12:28.619 sys 0m5.535s 00:12:28.619 
09:45:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.619 ************************************ 00:12:28.619 END TEST xnvme_fio_plugin 00:12:28.619 ************************************ 00:12:28.619 09:45:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:28.619 09:45:07 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:28.619 09:45:07 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:28.619 09:45:07 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:28.619 09:45:07 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:28.619 09:45:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:28.619 09:45:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.619 09:45:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.619 ************************************ 00:12:28.619 START TEST xnvme_rpc 00:12:28.619 ************************************ 00:12:28.619 09:45:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:28.619 09:45:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:28.619 09:45:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:28.619 09:45:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:28.619 09:45:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:28.619 09:45:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69388 00:12:28.619 09:45:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69388 00:12:28.619 09:45:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69388 ']' 00:12:28.619 09:45:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.619 09:45:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.619 09:45:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.619 09:45:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.619 09:45:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.619 09:45:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:28.878 [2024-11-28 09:45:07.545487] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
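The xnvme_rpc test starting here repeats the RPC sequence from the earlier run, this time passing -c so the bdev is created with conserve_cpu enabled. Outside the test harness, the same sequence can be sketched with scripts/rpc.py against the spdk_tgt just launched (assuming the default /var/tmp/spdk.sock socket); the arguments are taken verbatim from the rpc_cmd calls in this log:

# Sketch only; rpc_cmd in the harness forwards these same arguments to rpc.py.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
# Verify the registered bdev's parameters from the running target's config.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# Tear the bdev down again before stopping the target.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_delete xnvme_bdev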
00:12:28.879 [2024-11-28 09:45:07.545626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69388 ] 00:12:28.879 [2024-11-28 09:45:07.711014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.139 [2024-11-28 09:45:07.829933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.713 xnvme_bdev 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:29.713 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:29.974 09:45:08 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69388 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69388 ']' 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69388 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69388 00:12:29.974 killing process with pid 69388 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69388' 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69388 00:12:29.974 09:45:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69388 00:12:31.893 00:12:31.893 real 0m2.910s 00:12:31.893 user 0m2.876s 00:12:31.893 sys 0m0.492s 00:12:31.893 09:45:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.893 ************************************ 00:12:31.893 END TEST xnvme_rpc 00:12:31.893 ************************************ 00:12:31.893 09:45:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.893 09:45:10 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:31.893 09:45:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:31.893 09:45:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.893 09:45:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:31.893 ************************************ 00:12:31.893 START TEST xnvme_bdevperf 00:12:31.893 ************************************ 00:12:31.893 09:45:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:31.893 09:45:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:31.893 09:45:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:31.893 09:45:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:31.893 09:45:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:31.893 09:45:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:31.893 09:45:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:31.893 09:45:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:31.893 { 00:12:31.893 "subsystems": [ 00:12:31.893 { 00:12:31.893 "subsystem": "bdev", 00:12:31.893 "config": [ 00:12:31.894 { 00:12:31.894 "params": { 00:12:31.894 "io_mechanism": "libaio", 00:12:31.894 "conserve_cpu": true, 00:12:31.894 "filename": "/dev/nvme0n1", 00:12:31.894 "name": "xnvme_bdev" 00:12:31.894 }, 00:12:31.894 "method": "bdev_xnvme_create" 00:12:31.894 }, 00:12:31.894 { 00:12:31.894 "method": "bdev_wait_for_examine" 00:12:31.894 } 00:12:31.894 ] 00:12:31.894 } 00:12:31.894 ] 00:12:31.894 } 00:12:31.894 [2024-11-28 09:45:10.505105] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:31.894 [2024-11-28 09:45:10.505273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69462 ] 00:12:31.894 [2024-11-28 09:45:10.669508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.155 [2024-11-28 09:45:10.791330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.417 Running I/O for 5 seconds... 00:12:34.305 30009.00 IOPS, 117.22 MiB/s [2024-11-28T09:45:14.128Z] 31765.00 IOPS, 124.08 MiB/s [2024-11-28T09:45:15.513Z] 32403.67 IOPS, 126.58 MiB/s [2024-11-28T09:45:16.456Z] 31752.50 IOPS, 124.03 MiB/s 00:12:37.576 Latency(us) 00:12:37.576 [2024-11-28T09:45:16.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.576 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:37.576 xnvme_bdev : 5.00 32195.34 125.76 0.00 0.00 1983.24 319.80 6251.13 00:12:37.576 [2024-11-28T09:45:16.456Z] =================================================================================================================== 00:12:37.576 [2024-11-28T09:45:16.456Z] Total : 32195.34 125.76 0.00 0.00 1983.24 319.80 6251.13 00:12:38.148 09:45:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:38.148 09:45:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:38.148 09:45:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:38.148 09:45:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:38.148 09:45:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:38.148 { 00:12:38.148 "subsystems": [ 00:12:38.148 { 00:12:38.148 "subsystem": "bdev", 00:12:38.148 "config": [ 00:12:38.148 { 00:12:38.148 "params": { 00:12:38.148 "io_mechanism": "libaio", 00:12:38.148 "conserve_cpu": true, 00:12:38.148 "filename": "/dev/nvme0n1", 00:12:38.148 "name": "xnvme_bdev" 00:12:38.148 }, 00:12:38.148 "method": "bdev_xnvme_create" 00:12:38.148 }, 00:12:38.148 { 00:12:38.148 "method": "bdev_wait_for_examine" 00:12:38.148 } 00:12:38.148 ] 00:12:38.148 } 00:12:38.148 ] 00:12:38.148 } 00:12:38.148 [2024-11-28 09:45:16.958032] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:38.148 [2024-11-28 09:45:16.958375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69543 ] 00:12:38.410 [2024-11-28 09:45:17.122656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.410 [2024-11-28 09:45:17.239244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.672 Running I/O for 5 seconds... 00:12:40.697 5149.00 IOPS, 20.11 MiB/s [2024-11-28T09:45:20.963Z] 5330.50 IOPS, 20.82 MiB/s [2024-11-28T09:45:21.907Z] 5331.67 IOPS, 20.83 MiB/s [2024-11-28T09:45:22.851Z] 5384.50 IOPS, 21.03 MiB/s [2024-11-28T09:45:22.851Z] 5374.60 IOPS, 20.99 MiB/s 00:12:43.971 Latency(us) 00:12:43.971 [2024-11-28T09:45:22.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.971 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:43.971 xnvme_bdev : 5.01 5373.62 20.99 0.00 0.00 11890.05 53.17 37910.06 00:12:43.971 [2024-11-28T09:45:22.851Z] =================================================================================================================== 00:12:43.971 [2024-11-28T09:45:22.851Z] Total : 5373.62 20.99 0.00 0.00 11890.05 53.17 37910.06 00:12:44.545 00:12:44.545 real 0m12.932s 00:12:44.545 user 0m8.127s 00:12:44.545 sys 0m3.608s 00:12:44.545 09:45:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.545 ************************************ 00:12:44.545 END TEST xnvme_bdevperf 00:12:44.545 ************************************ 00:12:44.545 09:45:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:44.545 09:45:23 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:44.545 09:45:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:44.545 09:45:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.545 09:45:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.807 ************************************ 00:12:44.807 START TEST xnvme_fio_plugin 00:12:44.807 ************************************ 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:44.807 09:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:44.807 { 00:12:44.807 "subsystems": [ 00:12:44.807 { 00:12:44.807 "subsystem": "bdev", 00:12:44.807 "config": [ 00:12:44.807 { 00:12:44.807 "params": { 00:12:44.807 "io_mechanism": "libaio", 00:12:44.807 "conserve_cpu": true, 00:12:44.807 "filename": "/dev/nvme0n1", 00:12:44.807 "name": "xnvme_bdev" 00:12:44.807 }, 00:12:44.807 "method": "bdev_xnvme_create" 00:12:44.807 }, 00:12:44.807 { 00:12:44.807 "method": "bdev_wait_for_examine" 00:12:44.807 } 00:12:44.807 ] 00:12:44.807 } 00:12:44.807 ] 00:12:44.807 } 00:12:44.807 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:44.807 fio-3.35 00:12:44.807 Starting 1 thread 00:12:51.402 00:12:51.402 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69662: Thu Nov 28 09:45:29 2024 00:12:51.402 read: IOPS=34.0k, BW=133MiB/s (139MB/s)(665MiB/5001msec) 00:12:51.402 slat (usec): min=4, max=2355, avg=20.51, stdev=89.36 00:12:51.402 clat (usec): min=106, max=4747, avg=1324.28, stdev=539.03 00:12:51.402 lat (usec): min=177, max=4832, avg=1344.78, stdev=531.54 00:12:51.402 clat percentiles (usec): 00:12:51.402 | 1.00th=[ 273], 5.00th=[ 482], 10.00th=[ 652], 20.00th=[ 873], 00:12:51.402 | 30.00th=[ 1045], 40.00th=[ 1188], 50.00th=[ 1303], 60.00th=[ 1418], 00:12:51.402 | 70.00th=[ 1549], 80.00th=[ 1729], 90.00th=[ 1975], 95.00th=[ 2245], 00:12:51.402 | 99.00th=[ 2933], 99.50th=[ 3195], 99.90th=[ 3720], 99.95th=[ 3916], 00:12:51.402 | 99.99th=[ 4228] 00:12:51.402 bw ( KiB/s): min=131672, max=145936, 
per=100.00%, avg=137647.11, stdev=5192.08, samples=9 00:12:51.402 iops : min=32918, max=36484, avg=34411.78, stdev=1298.02, samples=9 00:12:51.402 lat (usec) : 250=0.72%, 500=4.72%, 750=8.44%, 1000=13.33% 00:12:51.402 lat (msec) : 2=63.38%, 4=9.38%, 10=0.03% 00:12:51.402 cpu : usr=43.20%, sys=48.28%, ctx=17, majf=0, minf=764 00:12:51.402 IO depths : 1=0.6%, 2=1.3%, 4=3.4%, 8=9.0%, 16=23.5%, 32=60.1%, >=64=2.0% 00:12:51.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.402 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:12:51.402 issued rwts: total=170201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.402 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:51.402 00:12:51.402 Run status group 0 (all jobs): 00:12:51.402 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=665MiB (697MB), run=5001-5001msec 00:12:51.664 ----------------------------------------------------- 00:12:51.664 Suppressions used: 00:12:51.664 count bytes template 00:12:51.664 1 11 /usr/src/fio/parse.c 00:12:51.664 1 8 libtcmalloc_minimal.so 00:12:51.664 1 904 libcrypto.so 00:12:51.664 ----------------------------------------------------- 00:12:51.664 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:51.664 09:45:30 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:51.664 09:45:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:51.664 { 00:12:51.664 "subsystems": [ 00:12:51.664 { 00:12:51.664 "subsystem": "bdev", 00:12:51.664 "config": [ 00:12:51.665 { 00:12:51.665 "params": { 00:12:51.665 "io_mechanism": "libaio", 00:12:51.665 "conserve_cpu": true, 00:12:51.665 "filename": "/dev/nvme0n1", 00:12:51.665 "name": "xnvme_bdev" 00:12:51.665 }, 00:12:51.665 "method": "bdev_xnvme_create" 00:12:51.665 }, 00:12:51.665 { 00:12:51.665 "method": "bdev_wait_for_examine" 00:12:51.665 } 00:12:51.665 ] 00:12:51.665 } 00:12:51.665 ] 00:12:51.665 } 00:12:51.927 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:51.927 fio-3.35 00:12:51.927 Starting 1 thread 00:12:58.522 00:12:58.522 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69754: Thu Nov 28 09:45:36 2024 00:12:58.522 write: IOPS=35.9k, BW=140MiB/s (147MB/s)(702MiB/5001msec); 0 zone resets 00:12:58.522 slat (usec): min=4, max=1980, avg=19.94, stdev=83.76 00:12:58.522 clat (usec): min=35, max=9344, avg=1244.34, stdev=517.56 00:12:58.522 lat (usec): min=82, max=9350, avg=1264.28, stdev=511.41 00:12:58.522 clat percentiles (usec): 00:12:58.522 | 1.00th=[ 277], 5.00th=[ 478], 10.00th=[ 627], 20.00th=[ 832], 00:12:58.522 | 30.00th=[ 971], 40.00th=[ 1090], 50.00th=[ 1205], 60.00th=[ 1319], 00:12:58.522 | 70.00th=[ 1450], 80.00th=[ 1598], 90.00th=[ 1860], 95.00th=[ 2114], 00:12:58.522 | 99.00th=[ 2835], 99.50th=[ 3130], 99.90th=[ 3687], 99.95th=[ 4015], 00:12:58.522 | 99.99th=[ 7439] 00:12:58.522 bw ( KiB/s): min=127960, max=156656, per=100.00%, avg=143690.67, stdev=9162.33, samples=9 00:12:58.522 iops : min=31990, max=39164, avg=35922.67, stdev=2290.58, samples=9 00:12:58.522 lat (usec) : 50=0.01%, 100=0.01%, 250=0.71%, 500=4.92%, 750=9.85% 00:12:58.522 lat (usec) : 1000=16.49% 00:12:58.522 lat (msec) : 2=61.00%, 4=6.98%, 10=0.05% 00:12:58.522 cpu : usr=43.14%, sys=47.70%, ctx=11, majf=0, minf=765 00:12:58.522 IO depths : 1=0.5%, 2=1.2%, 4=3.3%, 8=8.9%, 16=23.5%, 32=60.5%, >=64=2.1% 00:12:58.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.522 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:12:58.522 issued rwts: total=0,179596,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.522 00:12:58.522 Run status group 0 (all jobs): 00:12:58.522 WRITE: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=702MiB (736MB), run=5001-5001msec 00:12:58.522 ----------------------------------------------------- 00:12:58.522 Suppressions used: 00:12:58.522 count bytes template 00:12:58.522 1 11 /usr/src/fio/parse.c 00:12:58.522 1 8 libtcmalloc_minimal.so 00:12:58.522 1 904 libcrypto.so 00:12:58.522 ----------------------------------------------------- 00:12:58.522 00:12:58.522 00:12:58.522 real 0m13.875s 00:12:58.522 user 0m7.151s 
00:12:58.522 sys 0m5.436s 00:12:58.522 09:45:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.522 ************************************ 00:12:58.522 END TEST xnvme_fio_plugin 00:12:58.522 ************************************ 00:12:58.522 09:45:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:58.522 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:58.522 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:58.522 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:58.522 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:58.522 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:58.522 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:58.522 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:58.522 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:58.522 09:45:37 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:58.522 09:45:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:58.522 09:45:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.522 09:45:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.522 ************************************ 00:12:58.522 START TEST xnvme_rpc 00:12:58.522 ************************************ 00:12:58.522 09:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:58.523 09:45:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:58.523 09:45:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:58.523 09:45:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:58.523 09:45:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:58.523 09:45:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69840 00:12:58.523 09:45:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69840 00:12:58.523 09:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69840 ']' 00:12:58.523 09:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.523 09:45:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:58.523 09:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.523 09:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.523 09:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.523 09:45:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.784 [2024-11-28 09:45:37.457678] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
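The fio_plugin runs above drive fio through SPDK's external spdk_bdev ioengine, with ASAN preloaded alongside the plugin and the JSON bdev config again supplied on /dev/fd/62. A condensed sketch of the logged invocation, assuming the same fio and plugin paths and a regular config file, is:

# Sketch only; the command line and job parameters are copied from the fio runs above.
# /tmp/xnvme_libaio.json would hold one of the bdev_xnvme_create configs printed in
# this log (e.g. the libaio one from the earlier sketch).
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_libaio.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev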
00:12:58.784 [2024-11-28 09:45:37.457839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69840 ] 00:12:58.784 [2024-11-28 09:45:37.622098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.045 [2024-11-28 09:45:37.745185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.617 xnvme_bdev 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.617 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:59.878 09:45:38 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69840 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69840 ']' 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69840 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69840 00:12:59.878 killing process with pid 69840 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69840' 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69840 00:12:59.878 09:45:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69840 00:13:01.792 00:13:01.792 real 0m2.901s 00:13:01.792 user 0m2.892s 00:13:01.792 sys 0m0.490s 00:13:01.793 09:45:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.793 ************************************ 00:13:01.793 END TEST xnvme_rpc 00:13:01.793 ************************************ 00:13:01.793 09:45:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.793 09:45:40 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:01.793 09:45:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:01.793 09:45:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.793 09:45:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:01.793 ************************************ 00:13:01.793 START TEST xnvme_bdevperf 00:13:01.793 ************************************ 00:13:01.793 09:45:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:01.793 09:45:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:01.793 09:45:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:01.793 09:45:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:01.793 09:45:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:01.793 09:45:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:01.793 09:45:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:01.793 09:45:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:01.793 { 00:13:01.793 "subsystems": [ 00:13:01.793 { 00:13:01.793 "subsystem": "bdev", 00:13:01.793 "config": [ 00:13:01.793 { 00:13:01.793 "params": { 00:13:01.793 "io_mechanism": "io_uring", 00:13:01.793 "conserve_cpu": false, 00:13:01.793 "filename": "/dev/nvme0n1", 00:13:01.793 "name": "xnvme_bdev" 00:13:01.793 }, 00:13:01.793 "method": "bdev_xnvme_create" 00:13:01.793 }, 00:13:01.793 { 00:13:01.793 "method": "bdev_wait_for_examine" 00:13:01.793 } 00:13:01.793 ] 00:13:01.793 } 00:13:01.793 ] 00:13:01.793 } 00:13:01.793 [2024-11-28 09:45:40.416410] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:01.793 [2024-11-28 09:45:40.416556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69912 ] 00:13:01.793 [2024-11-28 09:45:40.581618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.054 [2024-11-28 09:45:40.706511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.322 Running I/O for 5 seconds... 00:13:04.213 35859.00 IOPS, 140.07 MiB/s [2024-11-28T09:45:44.035Z] 36190.00 IOPS, 141.37 MiB/s [2024-11-28T09:45:45.420Z] 35798.00 IOPS, 139.84 MiB/s [2024-11-28T09:45:46.363Z] 35211.00 IOPS, 137.54 MiB/s [2024-11-28T09:45:46.363Z] 34926.20 IOPS, 136.43 MiB/s 00:13:07.483 Latency(us) 00:13:07.483 [2024-11-28T09:45:46.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.483 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:07.483 xnvme_bdev : 5.00 34891.56 136.30 0.00 0.00 1828.66 316.65 13208.02 00:13:07.483 [2024-11-28T09:45:46.363Z] =================================================================================================================== 00:13:07.483 [2024-11-28T09:45:46.363Z] Total : 34891.56 136.30 0.00 0.00 1828.66 316.65 13208.02 00:13:08.057 09:45:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:08.057 09:45:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:08.057 09:45:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:08.057 09:45:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:08.057 09:45:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:08.057 { 00:13:08.057 "subsystems": [ 00:13:08.057 { 00:13:08.057 "subsystem": "bdev", 00:13:08.057 "config": [ 00:13:08.057 { 00:13:08.057 "params": { 00:13:08.057 "io_mechanism": "io_uring", 00:13:08.057 "conserve_cpu": false, 00:13:08.057 "filename": "/dev/nvme0n1", 00:13:08.057 "name": "xnvme_bdev" 00:13:08.057 }, 00:13:08.057 "method": "bdev_xnvme_create" 00:13:08.057 }, 00:13:08.057 { 00:13:08.057 "method": "bdev_wait_for_examine" 00:13:08.057 } 00:13:08.057 ] 00:13:08.057 } 00:13:08.057 ] 00:13:08.057 } 00:13:08.057 [2024-11-28 09:45:46.797336] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:13:08.057 [2024-11-28 09:45:46.797474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69985 ] 00:13:08.318 [2024-11-28 09:45:46.960604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.318 [2024-11-28 09:45:47.082313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.579 Running I/O for 5 seconds... 00:13:10.912 8489.00 IOPS, 33.16 MiB/s [2024-11-28T09:45:50.734Z] 8539.50 IOPS, 33.36 MiB/s [2024-11-28T09:45:51.674Z] 8569.00 IOPS, 33.47 MiB/s [2024-11-28T09:45:52.664Z] 8566.25 IOPS, 33.46 MiB/s [2024-11-28T09:45:52.664Z] 8568.00 IOPS, 33.47 MiB/s 00:13:13.784 Latency(us) 00:13:13.784 [2024-11-28T09:45:52.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.784 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:13.784 xnvme_bdev : 5.01 8561.87 33.44 0.00 0.00 7460.60 69.71 27625.94 00:13:13.784 [2024-11-28T09:45:52.664Z] =================================================================================================================== 00:13:13.784 [2024-11-28T09:45:52.664Z] Total : 8561.87 33.44 0.00 0.00 7460.60 69.71 27625.94 00:13:14.356 00:13:14.356 real 0m12.834s 00:13:14.356 user 0m5.842s 00:13:14.356 sys 0m6.718s 00:13:14.356 ************************************ 00:13:14.356 END TEST xnvme_bdevperf 00:13:14.356 ************************************ 00:13:14.356 09:45:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.356 09:45:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:14.356 09:45:53 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:14.356 09:45:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:14.356 09:45:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.356 09:45:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.616 ************************************ 00:13:14.616 START TEST xnvme_fio_plugin 00:13:14.616 ************************************ 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:14.616 09:45:53 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:14.616 09:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.616 { 00:13:14.616 "subsystems": [ 00:13:14.616 { 00:13:14.616 "subsystem": "bdev", 00:13:14.616 "config": [ 00:13:14.616 { 00:13:14.616 "params": { 00:13:14.616 "io_mechanism": "io_uring", 00:13:14.616 "conserve_cpu": false, 00:13:14.616 "filename": "/dev/nvme0n1", 00:13:14.616 "name": "xnvme_bdev" 00:13:14.616 }, 00:13:14.616 "method": "bdev_xnvme_create" 00:13:14.616 }, 00:13:14.616 { 00:13:14.616 "method": "bdev_wait_for_examine" 00:13:14.616 } 00:13:14.616 ] 00:13:14.616 } 00:13:14.616 ] 00:13:14.616 } 00:13:14.616 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:14.616 fio-3.35 00:13:14.616 Starting 1 thread 00:13:21.278 00:13:21.278 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70104: Thu Nov 28 09:45:59 2024 00:13:21.278 read: IOPS=36.6k, BW=143MiB/s (150MB/s)(714MiB/5001msec) 00:13:21.278 slat (usec): min=2, max=225, avg= 3.71, stdev= 2.53 00:13:21.278 clat (usec): min=878, max=4705, avg=1598.98, stdev=300.99 00:13:21.278 lat (usec): min=881, max=4719, avg=1602.68, stdev=301.37 00:13:21.278 clat percentiles (usec): 00:13:21.278 | 1.00th=[ 1090], 5.00th=[ 1172], 10.00th=[ 1254], 20.00th=[ 1352], 00:13:21.278 | 30.00th=[ 1418], 40.00th=[ 1483], 50.00th=[ 1565], 60.00th=[ 1631], 00:13:21.278 | 70.00th=[ 1729], 80.00th=[ 1844], 90.00th=[ 1991], 95.00th=[ 2147], 00:13:21.278 | 99.00th=[ 2409], 99.50th=[ 2573], 99.90th=[ 3032], 99.95th=[ 3163], 00:13:21.278 | 99.99th=[ 4621] 00:13:21.278 bw ( KiB/s): min=130048, 
max=179712, per=100.00%, avg=147024.89, stdev=14683.00, samples=9 00:13:21.278 iops : min=32512, max=44928, avg=36756.22, stdev=3670.75, samples=9 00:13:21.278 lat (usec) : 1000=0.13% 00:13:21.278 lat (msec) : 2=90.08%, 4=9.75%, 10=0.04% 00:13:21.278 cpu : usr=31.18%, sys=67.04%, ctx=40, majf=0, minf=762 00:13:21.278 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:21.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.278 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:21.278 issued rwts: total=182848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.278 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.278 00:13:21.278 Run status group 0 (all jobs): 00:13:21.278 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=714MiB (749MB), run=5001-5001msec 00:13:21.540 ----------------------------------------------------- 00:13:21.540 Suppressions used: 00:13:21.540 count bytes template 00:13:21.540 1 11 /usr/src/fio/parse.c 00:13:21.540 1 8 libtcmalloc_minimal.so 00:13:21.540 1 904 libcrypto.so 00:13:21.540 ----------------------------------------------------- 00:13:21.540 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:21.540 09:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:21.540 { 00:13:21.540 "subsystems": [ 00:13:21.540 { 00:13:21.540 "subsystem": "bdev", 00:13:21.540 "config": [ 00:13:21.540 { 00:13:21.540 "params": { 00:13:21.540 "io_mechanism": "io_uring", 00:13:21.540 "conserve_cpu": false, 00:13:21.540 "filename": "/dev/nvme0n1", 00:13:21.540 "name": "xnvme_bdev" 00:13:21.540 }, 00:13:21.540 "method": "bdev_xnvme_create" 00:13:21.540 }, 00:13:21.540 { 00:13:21.540 "method": "bdev_wait_for_examine" 00:13:21.540 } 00:13:21.540 ] 00:13:21.540 } 00:13:21.540 ] 00:13:21.540 } 00:13:21.541 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:21.541 fio-3.35 00:13:21.541 Starting 1 thread 00:13:28.132 00:13:28.132 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70196: Thu Nov 28 09:46:06 2024 00:13:28.132 write: IOPS=36.0k, BW=141MiB/s (148MB/s)(704MiB/5002msec); 0 zone resets 00:13:28.132 slat (usec): min=2, max=137, avg= 3.74, stdev= 2.18 00:13:28.132 clat (usec): min=182, max=9848, avg=1624.07, stdev=345.11 00:13:28.132 lat (usec): min=185, max=9851, avg=1627.81, stdev=345.54 00:13:28.132 clat percentiles (usec): 00:13:28.132 | 1.00th=[ 1045], 5.00th=[ 1156], 10.00th=[ 1221], 20.00th=[ 1336], 00:13:28.132 | 30.00th=[ 1434], 40.00th=[ 1516], 50.00th=[ 1598], 60.00th=[ 1680], 00:13:28.132 | 70.00th=[ 1762], 80.00th=[ 1876], 90.00th=[ 2040], 95.00th=[ 2212], 00:13:28.132 | 99.00th=[ 2573], 99.50th=[ 2802], 99.90th=[ 3392], 99.95th=[ 4178], 00:13:28.132 | 99.99th=[ 7832] 00:13:28.132 bw ( KiB/s): min=135088, max=170992, per=98.61%, avg=142168.89, stdev=11277.86, samples=9 00:13:28.132 iops : min=33772, max=42748, avg=35542.44, stdev=2819.32, samples=9 00:13:28.132 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.35% 00:13:28.132 lat (msec) : 2=87.70%, 4=11.85%, 10=0.07% 00:13:28.132 cpu : usr=33.85%, sys=64.85%, ctx=12, majf=0, minf=763 00:13:28.132 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:13:28.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.132 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:28.132 issued rwts: total=0,180286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.132 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:28.132 00:13:28.132 Run status group 0 (all jobs): 00:13:28.132 WRITE: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=704MiB (738MB), run=5002-5002msec 00:13:28.393 ----------------------------------------------------- 00:13:28.393 Suppressions used: 00:13:28.393 count bytes template 00:13:28.393 1 11 /usr/src/fio/parse.c 00:13:28.393 1 8 libtcmalloc_minimal.so 00:13:28.393 1 904 libcrypto.so 00:13:28.393 ----------------------------------------------------- 00:13:28.393 00:13:28.393 00:13:28.393 real 0m13.935s 00:13:28.393 user 0m6.251s 00:13:28.393 sys 0m7.198s 00:13:28.393 09:46:07 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.393 ************************************ 00:13:28.393 END TEST xnvme_fio_plugin 00:13:28.393 ************************************ 00:13:28.393 09:46:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:28.393 09:46:07 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:28.393 09:46:07 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:28.393 09:46:07 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:28.393 09:46:07 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:28.393 09:46:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:28.393 09:46:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.393 09:46:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.393 ************************************ 00:13:28.393 START TEST xnvme_rpc 00:13:28.393 ************************************ 00:13:28.393 09:46:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:28.393 09:46:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:28.393 09:46:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:28.393 09:46:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:28.393 09:46:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:28.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.393 09:46:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70283 00:13:28.393 09:46:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70283 00:13:28.393 09:46:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70283 ']' 00:13:28.393 09:46:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.393 09:46:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.393 09:46:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.393 09:46:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.393 09:46:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.393 09:46:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:28.687 [2024-11-28 09:46:07.340928] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
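This second xnvme_rpc pass is the conserve_cpu=true leg of the loop at xnvme.sh@82-84 above. The rpc_cmd helper used throughout it appears to forward its arguments to SPDK's scripts/rpc.py against the spdk_tgt just started, so the calls that follow would look roughly like this when issued by hand (a sketch, assuming the default RPC socket and the same positional argument order as in the trace):

  # create the bdev on top of /dev/nvme0n1; -c enables conserve_cpu
  scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
  # inspect what was created, as rpc_xnvme does via framework_get_config
  scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
  # tear the bdev down again
  scripts/rpc.py bdev_xnvme_delete xnvme_bdev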
00:13:28.687 [2024-11-28 09:46:07.341099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70283 ] 00:13:28.687 [2024-11-28 09:46:07.510885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.948 [2024-11-28 09:46:07.632108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.520 xnvme_bdev 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.520 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:29.781 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70283 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70283 ']' 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70283 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70283 00:13:29.782 killing process with pid 70283 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70283' 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70283 00:13:29.782 09:46:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70283 00:13:31.697 ************************************ 00:13:31.697 END TEST xnvme_rpc 00:13:31.697 ************************************ 00:13:31.697 00:13:31.697 real 0m2.983s 00:13:31.697 user 0m2.957s 00:13:31.697 sys 0m0.519s 00:13:31.697 09:46:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.697 09:46:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.697 09:46:10 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:31.697 09:46:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:31.697 09:46:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.697 09:46:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.697 ************************************ 00:13:31.697 START TEST xnvme_bdevperf 00:13:31.697 ************************************ 00:13:31.697 09:46:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:31.697 09:46:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:31.697 09:46:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:31.697 09:46:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:31.697 09:46:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:31.697 09:46:10 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:31.697 09:46:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:31.697 09:46:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:31.697 { 00:13:31.697 "subsystems": [ 00:13:31.697 { 00:13:31.697 "subsystem": "bdev", 00:13:31.697 "config": [ 00:13:31.697 { 00:13:31.697 "params": { 00:13:31.697 "io_mechanism": "io_uring", 00:13:31.697 "conserve_cpu": true, 00:13:31.697 "filename": "/dev/nvme0n1", 00:13:31.697 "name": "xnvme_bdev" 00:13:31.697 }, 00:13:31.697 "method": "bdev_xnvme_create" 00:13:31.697 }, 00:13:31.697 { 00:13:31.697 "method": "bdev_wait_for_examine" 00:13:31.697 } 00:13:31.697 ] 00:13:31.697 } 00:13:31.697 ] 00:13:31.697 } 00:13:31.697 [2024-11-28 09:46:10.377194] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:31.697 [2024-11-28 09:46:10.377341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70357 ] 00:13:31.697 [2024-11-28 09:46:10.540960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.958 [2024-11-28 09:46:10.664319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.219 Running I/O for 5 seconds... 00:13:34.104 34008.00 IOPS, 132.84 MiB/s [2024-11-28T09:46:14.370Z] 35292.50 IOPS, 137.86 MiB/s [2024-11-28T09:46:15.332Z] 35242.33 IOPS, 137.67 MiB/s [2024-11-28T09:46:16.274Z] 35307.75 IOPS, 137.92 MiB/s 00:13:37.394 Latency(us) 00:13:37.394 [2024-11-28T09:46:16.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.394 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:37.394 xnvme_bdev : 5.00 35383.94 138.22 0.00 0.00 1804.57 357.61 17039.36 00:13:37.394 [2024-11-28T09:46:16.274Z] =================================================================================================================== 00:13:37.394 [2024-11-28T09:46:16.274Z] Total : 35383.94 138.22 0.00 0.00 1804.57 357.61 17039.36 00:13:37.964 09:46:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:37.964 09:46:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:37.964 09:46:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:37.964 09:46:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:37.964 09:46:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:37.964 { 00:13:37.964 "subsystems": [ 00:13:37.964 { 00:13:37.964 "subsystem": "bdev", 00:13:37.964 "config": [ 00:13:37.964 { 00:13:37.964 "params": { 00:13:37.964 "io_mechanism": "io_uring", 00:13:37.964 "conserve_cpu": true, 00:13:37.964 "filename": "/dev/nvme0n1", 00:13:37.964 "name": "xnvme_bdev" 00:13:37.964 }, 00:13:37.964 "method": "bdev_xnvme_create" 00:13:37.964 }, 00:13:37.964 { 00:13:37.964 "method": "bdev_wait_for_examine" 00:13:37.964 } 00:13:37.964 ] 00:13:37.964 } 00:13:37.964 ] 00:13:37.964 } 00:13:37.964 [2024-11-28 09:46:16.834983] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
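The MiB/s column in these bdevperf tables is simply IOPS times the 4096-byte I/O size: for the randread result above, 35383.94 IOPS x 4096 B ≈ 144.9 MB/s, i.e. 35383.94 / 256 ≈ 138.22 MiB/s, matching the reported figure. The Average/min/max columns are latencies in microseconds, as the "(us)" in the table header indicates.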
00:13:37.964 [2024-11-28 09:46:16.835136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70433 ] 00:13:38.224 [2024-11-28 09:46:17.002359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.486 [2024-11-28 09:46:17.123575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.747 Running I/O for 5 seconds... 00:13:40.634 10874.00 IOPS, 42.48 MiB/s [2024-11-28T09:46:20.456Z] 11290.50 IOPS, 44.10 MiB/s [2024-11-28T09:46:21.842Z] 11341.33 IOPS, 44.30 MiB/s [2024-11-28T09:46:22.788Z] 11351.25 IOPS, 44.34 MiB/s [2024-11-28T09:46:22.788Z] 11491.00 IOPS, 44.89 MiB/s 00:13:43.908 Latency(us) 00:13:43.908 [2024-11-28T09:46:22.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.908 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:43.908 xnvme_bdev : 5.01 11485.45 44.87 0.00 0.00 5565.88 73.65 21475.64 00:13:43.908 [2024-11-28T09:46:22.788Z] =================================================================================================================== 00:13:43.908 [2024-11-28T09:46:22.788Z] Total : 11485.45 44.87 0.00 0.00 5565.88 73.65 21475.64 00:13:44.482 00:13:44.482 real 0m12.923s 00:13:44.482 user 0m9.054s 00:13:44.482 sys 0m2.856s 00:13:44.482 09:46:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.482 ************************************ 00:13:44.482 END TEST xnvme_bdevperf 00:13:44.482 ************************************ 00:13:44.482 09:46:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:44.482 09:46:23 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:44.482 09:46:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:44.482 09:46:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.482 09:46:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.482 ************************************ 00:13:44.482 START TEST xnvme_fio_plugin 00:13:44.482 ************************************ 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:44.482 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:44.483 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:44.483 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:44.483 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:44.483 09:46:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.483 { 00:13:44.483 "subsystems": [ 00:13:44.483 { 00:13:44.483 "subsystem": "bdev", 00:13:44.483 "config": [ 00:13:44.483 { 00:13:44.483 "params": { 00:13:44.483 "io_mechanism": "io_uring", 00:13:44.483 "conserve_cpu": true, 00:13:44.483 "filename": "/dev/nvme0n1", 00:13:44.483 "name": "xnvme_bdev" 00:13:44.483 }, 00:13:44.483 "method": "bdev_xnvme_create" 00:13:44.483 }, 00:13:44.483 { 00:13:44.483 "method": "bdev_wait_for_examine" 00:13:44.483 } 00:13:44.483 ] 00:13:44.483 } 00:13:44.483 ] 00:13:44.483 } 00:13:44.744 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:44.744 fio-3.35 00:13:44.744 Starting 1 thread 00:13:51.336 00:13:51.336 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70547: Thu Nov 28 09:46:29 2024 00:13:51.336 read: IOPS=34.0k, BW=133MiB/s (139MB/s)(664MiB/5002msec) 00:13:51.336 slat (nsec): min=2854, max=79784, avg=3807.85, stdev=2169.59 00:13:51.336 clat (usec): min=913, max=5661, avg=1729.73, stdev=321.47 00:13:51.336 lat (usec): min=916, max=5669, avg=1733.54, stdev=321.94 00:13:51.336 clat percentiles (usec): 00:13:51.336 | 1.00th=[ 1139], 5.00th=[ 1270], 10.00th=[ 1352], 20.00th=[ 1450], 00:13:51.336 | 30.00th=[ 1532], 40.00th=[ 1631], 50.00th=[ 1696], 60.00th=[ 1778], 00:13:51.336 | 70.00th=[ 1876], 80.00th=[ 1975], 90.00th=[ 2147], 95.00th=[ 2278], 00:13:51.336 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 2966], 99.95th=[ 3294], 00:13:51.336 | 99.99th=[ 5604] 00:13:51.336 bw ( KiB/s): min=126976, 
max=141824, per=99.68%, avg=135395.56, stdev=5272.05, samples=9 00:13:51.336 iops : min=31744, max=35456, avg=33848.89, stdev=1318.01, samples=9 00:13:51.336 lat (usec) : 1000=0.06% 00:13:51.336 lat (msec) : 2=81.58%, 4=18.32%, 10=0.04% 00:13:51.336 cpu : usr=54.01%, sys=42.05%, ctx=17, majf=0, minf=762 00:13:51.336 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:51.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.336 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:51.336 issued rwts: total=169856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.336 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:51.336 00:13:51.336 Run status group 0 (all jobs): 00:13:51.336 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=664MiB (696MB), run=5002-5002msec 00:13:51.336 ----------------------------------------------------- 00:13:51.336 Suppressions used: 00:13:51.336 count bytes template 00:13:51.336 1 11 /usr/src/fio/parse.c 00:13:51.336 1 8 libtcmalloc_minimal.so 00:13:51.336 1 904 libcrypto.so 00:13:51.336 ----------------------------------------------------- 00:13:51.336 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:51.336 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:51.597 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:51.597 09:46:30 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:51.597 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:51.597 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:51.597 09:46:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.597 { 00:13:51.597 "subsystems": [ 00:13:51.597 { 00:13:51.597 "subsystem": "bdev", 00:13:51.597 "config": [ 00:13:51.597 { 00:13:51.597 "params": { 00:13:51.597 "io_mechanism": "io_uring", 00:13:51.597 "conserve_cpu": true, 00:13:51.597 "filename": "/dev/nvme0n1", 00:13:51.597 "name": "xnvme_bdev" 00:13:51.597 }, 00:13:51.597 "method": "bdev_xnvme_create" 00:13:51.597 }, 00:13:51.597 { 00:13:51.597 "method": "bdev_wait_for_examine" 00:13:51.597 } 00:13:51.597 ] 00:13:51.597 } 00:13:51.597 ] 00:13:51.597 } 00:13:51.597 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:51.597 fio-3.35 00:13:51.597 Starting 1 thread 00:13:58.185 00:13:58.185 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70643: Thu Nov 28 09:46:36 2024 00:13:58.185 write: IOPS=35.4k, BW=138MiB/s (145MB/s)(692MiB/5002msec); 0 zone resets 00:13:58.185 slat (usec): min=2, max=107, avg= 3.91, stdev= 2.27 00:13:58.185 clat (usec): min=947, max=4575, avg=1647.36, stdev=305.57 00:13:58.185 lat (usec): min=950, max=4578, avg=1651.27, stdev=306.07 00:13:58.185 clat percentiles (usec): 00:13:58.185 | 1.00th=[ 1090], 5.00th=[ 1205], 10.00th=[ 1287], 20.00th=[ 1385], 00:13:58.185 | 30.00th=[ 1467], 40.00th=[ 1549], 50.00th=[ 1614], 60.00th=[ 1696], 00:13:58.185 | 70.00th=[ 1778], 80.00th=[ 1876], 90.00th=[ 2040], 95.00th=[ 2180], 00:13:58.185 | 99.00th=[ 2507], 99.50th=[ 2671], 99.90th=[ 3130], 99.95th=[ 3589], 00:13:58.185 | 99.99th=[ 4113] 00:13:58.185 bw ( KiB/s): min=134464, max=164176, per=100.00%, avg=142028.44, stdev=8961.21, samples=9 00:13:58.185 iops : min=33616, max=41044, avg=35507.11, stdev=2240.30, samples=9 00:13:58.185 lat (usec) : 1000=0.04% 00:13:58.185 lat (msec) : 2=88.00%, 4=11.94%, 10=0.02% 00:13:58.185 cpu : usr=57.21%, sys=38.55%, ctx=8, majf=0, minf=763 00:13:58.185 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:58.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.185 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:58.185 issued rwts: total=0,177150,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.185 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:58.185 00:13:58.185 Run status group 0 (all jobs): 00:13:58.185 WRITE: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=692MiB (726MB), run=5002-5002msec 00:13:58.468 ----------------------------------------------------- 00:13:58.468 Suppressions used: 00:13:58.468 count bytes template 00:13:58.468 1 11 /usr/src/fio/parse.c 00:13:58.468 1 8 libtcmalloc_minimal.so 00:13:58.468 1 904 libcrypto.so 00:13:58.468 ----------------------------------------------------- 00:13:58.468 00:13:58.468 ************************************ 00:13:58.468 END TEST xnvme_fio_plugin 00:13:58.468 ************************************ 00:13:58.468 00:13:58.468 real 0m13.806s 
00:13:58.468 user 0m8.458s 00:13:58.468 sys 0m4.603s 00:13:58.468 09:46:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.468 09:46:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:58.468 09:46:37 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:58.468 09:46:37 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:58.468 09:46:37 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:58.468 09:46:37 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:58.468 09:46:37 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:58.468 09:46:37 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:58.468 09:46:37 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:58.468 09:46:37 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:58.468 09:46:37 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:58.468 09:46:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:58.468 09:46:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.468 09:46:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:58.468 ************************************ 00:13:58.468 START TEST xnvme_rpc 00:13:58.468 ************************************ 00:13:58.468 09:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:58.468 09:46:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:58.468 09:46:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:58.468 09:46:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:58.468 09:46:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:58.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.468 09:46:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70725 00:13:58.468 09:46:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70725 00:13:58.468 09:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70725 ']' 00:13:58.468 09:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.468 09:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.468 09:46:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:58.468 09:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.468 09:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.468 09:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.468 [2024-11-28 09:46:37.254489] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
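From here the suite switches to the io_uring_cmd mechanism (xnvme.sh@75-80 above), which drives the NVMe generic character device /dev/ng0n1 via io_uring NVMe passthrough commands instead of the /dev/nvme0n1 block device. The create call that follows takes the same positional form as before, with the conserve_cpu flag left empty; issued by hand it would look roughly like this (a sketch, same caveats as the earlier rpc.py example):

  scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd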
00:13:58.468 [2024-11-28 09:46:37.254844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70725 ] 00:13:58.741 [2024-11-28 09:46:37.417790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.741 [2024-11-28 09:46:37.540813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.685 xnvme_bdev 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70725 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70725 ']' 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70725 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70725 00:13:59.685 killing process with pid 70725 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70725' 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70725 00:13:59.685 09:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70725 00:14:01.601 00:14:01.601 real 0m2.921s 00:14:01.601 user 0m2.920s 00:14:01.601 sys 0m0.478s 00:14:01.601 09:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.601 ************************************ 00:14:01.601 END TEST xnvme_rpc 00:14:01.601 ************************************ 00:14:01.601 09:46:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.601 09:46:40 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:01.601 09:46:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:01.601 09:46:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.601 09:46:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.601 ************************************ 00:14:01.601 START TEST xnvme_bdevperf 00:14:01.601 ************************************ 00:14:01.601 09:46:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:01.601 09:46:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:01.601 09:46:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:01.602 09:46:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:01.602 09:46:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:01.602 09:46:40 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:01.602 09:46:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:01.602 09:46:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:01.602 { 00:14:01.602 "subsystems": [ 00:14:01.602 { 00:14:01.602 "subsystem": "bdev", 00:14:01.602 "config": [ 00:14:01.602 { 00:14:01.602 "params": { 00:14:01.602 "io_mechanism": "io_uring_cmd", 00:14:01.602 "conserve_cpu": false, 00:14:01.602 "filename": "/dev/ng0n1", 00:14:01.602 "name": "xnvme_bdev" 00:14:01.602 }, 00:14:01.602 "method": "bdev_xnvme_create" 00:14:01.602 }, 00:14:01.602 { 00:14:01.602 "method": "bdev_wait_for_examine" 00:14:01.602 } 00:14:01.602 ] 00:14:01.602 } 00:14:01.602 ] 00:14:01.602 } 00:14:01.602 [2024-11-28 09:46:40.224551] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:01.602 [2024-11-28 09:46:40.224860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70799 ] 00:14:01.602 [2024-11-28 09:46:40.389791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.863 [2024-11-28 09:46:40.512440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.125 Running I/O for 5 seconds... 00:14:04.012 39393.00 IOPS, 153.88 MiB/s [2024-11-28T09:46:43.851Z] 36400.50 IOPS, 142.19 MiB/s [2024-11-28T09:46:45.237Z] 35765.67 IOPS, 139.71 MiB/s [2024-11-28T09:46:46.179Z] 35480.25 IOPS, 138.59 MiB/s 00:14:07.299 Latency(us) 00:14:07.299 [2024-11-28T09:46:46.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.299 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:07.299 xnvme_bdev : 5.00 35332.72 138.02 0.00 0.00 1806.94 529.33 5142.06 00:14:07.299 [2024-11-28T09:46:46.179Z] =================================================================================================================== 00:14:07.299 [2024-11-28T09:46:46.179Z] Total : 35332.72 138.02 0.00 0.00 1806.94 529.33 5142.06 00:14:07.871 09:46:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.871 09:46:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:07.871 09:46:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:07.871 09:46:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:07.871 09:46:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.871 { 00:14:07.871 "subsystems": [ 00:14:07.871 { 00:14:07.871 "subsystem": "bdev", 00:14:07.871 "config": [ 00:14:07.871 { 00:14:07.871 "params": { 00:14:07.871 "io_mechanism": "io_uring_cmd", 00:14:07.871 "conserve_cpu": false, 00:14:07.871 "filename": "/dev/ng0n1", 00:14:07.871 "name": "xnvme_bdev" 00:14:07.871 }, 00:14:07.871 "method": "bdev_xnvme_create" 00:14:07.871 }, 00:14:07.871 { 00:14:07.871 "method": "bdev_wait_for_examine" 00:14:07.871 } 00:14:07.871 ] 00:14:07.871 } 00:14:07.871 ] 00:14:07.871 } 00:14:07.871 [2024-11-28 09:46:46.685870] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
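A useful sanity check on these tables: with the queue kept full at depth 64, Little's law gives IOPS ≈ queue depth / average latency. For the randread result above that works out to 64 / 1806.94 us ≈ 35.4k IOPS, in line with the measured 35332.72; the small gap reflects time spent outside the device queue between completions and resubmissions.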
00:14:07.871 [2024-11-28 09:46:46.686211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70873 ] 00:14:08.132 [2024-11-28 09:46:46.850709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.132 [2024-11-28 09:46:46.969345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.394 Running I/O for 5 seconds... 00:14:10.720 36633.00 IOPS, 143.10 MiB/s [2024-11-28T09:46:50.543Z] 36624.00 IOPS, 143.06 MiB/s [2024-11-28T09:46:51.486Z] 36172.33 IOPS, 141.30 MiB/s [2024-11-28T09:46:52.428Z] 36337.25 IOPS, 141.94 MiB/s 00:14:13.548 Latency(us) 00:14:13.548 [2024-11-28T09:46:52.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.548 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:13.548 xnvme_bdev : 5.00 36124.05 141.11 0.00 0.00 1766.78 329.26 6074.68 00:14:13.548 [2024-11-28T09:46:52.428Z] =================================================================================================================== 00:14:13.548 [2024-11-28T09:46:52.428Z] Total : 36124.05 141.11 0.00 0.00 1766.78 329.26 6074.68 00:14:14.492 09:46:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:14.492 09:46:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:14.492 09:46:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:14.492 09:46:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:14.492 09:46:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:14.492 { 00:14:14.492 "subsystems": [ 00:14:14.492 { 00:14:14.492 "subsystem": "bdev", 00:14:14.492 "config": [ 00:14:14.492 { 00:14:14.492 "params": { 00:14:14.492 "io_mechanism": "io_uring_cmd", 00:14:14.492 "conserve_cpu": false, 00:14:14.492 "filename": "/dev/ng0n1", 00:14:14.492 "name": "xnvme_bdev" 00:14:14.492 }, 00:14:14.492 "method": "bdev_xnvme_create" 00:14:14.492 }, 00:14:14.492 { 00:14:14.492 "method": "bdev_wait_for_examine" 00:14:14.492 } 00:14:14.492 ] 00:14:14.492 } 00:14:14.492 ] 00:14:14.492 } 00:14:14.492 [2024-11-28 09:46:53.095627] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:14.492 [2024-11-28 09:46:53.095770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70942 ] 00:14:14.492 [2024-11-28 09:46:53.257663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.753 [2024-11-28 09:46:53.378786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.014 Running I/O for 5 seconds... 
00:14:16.901 79616.00 IOPS, 311.00 MiB/s [2024-11-28T09:46:56.725Z] 79808.00 IOPS, 311.75 MiB/s [2024-11-28T09:46:58.111Z] 79957.33 IOPS, 312.33 MiB/s [2024-11-28T09:46:58.685Z] 81120.00 IOPS, 316.88 MiB/s 00:14:19.806 Latency(us) 00:14:19.806 [2024-11-28T09:46:58.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.806 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:19.806 xnvme_bdev : 5.00 81389.27 317.93 0.00 0.00 782.97 529.33 2646.65 00:14:19.806 [2024-11-28T09:46:58.686Z] =================================================================================================================== 00:14:19.806 [2024-11-28T09:46:58.686Z] Total : 81389.27 317.93 0.00 0.00 782.97 529.33 2646.65 00:14:20.745 09:46:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:20.745 09:46:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:20.745 09:46:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:20.745 09:46:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:20.745 09:46:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:20.745 { 00:14:20.745 "subsystems": [ 00:14:20.745 { 00:14:20.745 "subsystem": "bdev", 00:14:20.745 "config": [ 00:14:20.745 { 00:14:20.745 "params": { 00:14:20.745 "io_mechanism": "io_uring_cmd", 00:14:20.745 "conserve_cpu": false, 00:14:20.745 "filename": "/dev/ng0n1", 00:14:20.745 "name": "xnvme_bdev" 00:14:20.745 }, 00:14:20.745 "method": "bdev_xnvme_create" 00:14:20.745 }, 00:14:20.745 { 00:14:20.745 "method": "bdev_wait_for_examine" 00:14:20.745 } 00:14:20.745 ] 00:14:20.745 } 00:14:20.745 ] 00:14:20.745 } 00:14:20.745 [2024-11-28 09:46:59.440493] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:20.745 [2024-11-28 09:46:59.440611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71022 ] 00:14:20.745 [2024-11-28 09:46:59.597281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.005 [2024-11-28 09:46:59.688299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.005 Running I/O for 5 seconds... 
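A note on the throughput columns: bdevperf derives MiB/s directly from IOPS and the fixed 4096-byte I/O size (-o 4096), i.e. MiB/s = IOPS * 4096 / 2^20. For the unmap run above that is 81389.27 * 4096 / 1048576, which is approximately 317.93 MiB/s, and the same conversion reproduces each per-second progress sample (e.g. 79616.00 IOPS -> 311.00 MiB/s).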
00:14:23.332 11160.00 IOPS, 43.59 MiB/s [2024-11-28T09:47:03.150Z] 23491.50 IOPS, 91.76 MiB/s [2024-11-28T09:47:04.087Z] 23009.67 IOPS, 89.88 MiB/s [2024-11-28T09:47:05.026Z] 21613.25 IOPS, 84.43 MiB/s [2024-11-28T09:47:05.026Z] 21767.60 IOPS, 85.03 MiB/s 00:14:26.146 Latency(us) 00:14:26.146 [2024-11-28T09:47:05.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.146 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:26.146 xnvme_bdev : 5.00 21769.75 85.04 0.00 0.00 2935.79 63.41 780785.82 00:14:26.146 [2024-11-28T09:47:05.026Z] =================================================================================================================== 00:14:26.146 [2024-11-28T09:47:05.026Z] Total : 21769.75 85.04 0.00 0.00 2935.79 63.41 780785.82 00:14:27.112 00:14:27.112 real 0m25.511s 00:14:27.112 user 0m14.306s 00:14:27.112 sys 0m10.679s 00:14:27.112 09:47:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.112 09:47:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:27.112 ************************************ 00:14:27.112 END TEST xnvme_bdevperf 00:14:27.112 ************************************ 00:14:27.112 09:47:05 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:27.112 09:47:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:27.112 09:47:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.112 09:47:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.112 ************************************ 00:14:27.112 START TEST xnvme_fio_plugin 00:14:27.112 ************************************ 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local 
asan_lib= 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:27.112 09:47:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:27.112 { 00:14:27.112 "subsystems": [ 00:14:27.112 { 00:14:27.112 "subsystem": "bdev", 00:14:27.112 "config": [ 00:14:27.112 { 00:14:27.112 "params": { 00:14:27.112 "io_mechanism": "io_uring_cmd", 00:14:27.112 "conserve_cpu": false, 00:14:27.112 "filename": "/dev/ng0n1", 00:14:27.112 "name": "xnvme_bdev" 00:14:27.112 }, 00:14:27.112 "method": "bdev_xnvme_create" 00:14:27.112 }, 00:14:27.112 { 00:14:27.112 "method": "bdev_wait_for_examine" 00:14:27.112 } 00:14:27.112 ] 00:14:27.112 } 00:14:27.112 ] 00:14:27.112 } 00:14:27.112 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:27.112 fio-3.35 00:14:27.112 Starting 1 thread 00:14:33.747 00:14:33.747 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71135: Thu Nov 28 09:47:11 2024 00:14:33.747 read: IOPS=34.6k, BW=135MiB/s (142MB/s)(677MiB/5002msec) 00:14:33.747 slat (usec): min=2, max=112, avg= 3.69, stdev= 2.17 00:14:33.747 clat (usec): min=858, max=3601, avg=1696.68, stdev=322.71 00:14:33.747 lat (usec): min=861, max=3635, avg=1700.37, stdev=323.04 00:14:33.747 clat percentiles (usec): 00:14:33.747 | 1.00th=[ 1123], 5.00th=[ 1254], 10.00th=[ 1319], 20.00th=[ 1418], 00:14:33.747 | 30.00th=[ 1500], 40.00th=[ 1582], 50.00th=[ 1663], 60.00th=[ 1745], 00:14:33.747 | 70.00th=[ 1827], 80.00th=[ 1942], 90.00th=[ 2114], 95.00th=[ 2278], 00:14:33.747 | 99.00th=[ 2638], 99.50th=[ 2835], 99.90th=[ 3228], 99.95th=[ 3359], 00:14:33.747 | 99.99th=[ 3523] 00:14:33.747 bw ( KiB/s): min=135680, max=140800, per=99.66%, avg=138126.22, stdev=1693.82, samples=9 00:14:33.747 iops : min=33920, max=35200, avg=34531.56, stdev=423.45, samples=9 00:14:33.747 lat (usec) : 1000=0.11% 00:14:33.747 lat (msec) : 2=83.96%, 4=15.93% 00:14:33.747 cpu : usr=36.89%, sys=61.83%, ctx=17, majf=0, minf=762 00:14:33.747 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:33.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.747 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:14:33.747 issued rwts: total=173312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:33.747 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:33.747 00:14:33.747 Run status group 0 (all jobs): 00:14:33.747 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=677MiB (710MB), run=5002-5002msec 00:14:33.747 ----------------------------------------------------- 00:14:33.747 Suppressions used: 00:14:33.747 count bytes template 00:14:33.747 1 11 /usr/src/fio/parse.c 00:14:33.747 1 8 libtcmalloc_minimal.so 00:14:33.747 1 904 libcrypto.so 00:14:33.747 ----------------------------------------------------- 00:14:33.747 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:34.009 09:47:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.009 { 00:14:34.009 "subsystems": [ 00:14:34.009 { 00:14:34.009 "subsystem": "bdev", 00:14:34.009 "config": [ 00:14:34.009 { 00:14:34.009 "params": { 00:14:34.009 "io_mechanism": "io_uring_cmd", 00:14:34.009 "conserve_cpu": false, 00:14:34.009 "filename": "/dev/ng0n1", 00:14:34.009 "name": "xnvme_bdev" 00:14:34.009 }, 00:14:34.009 "method": "bdev_xnvme_create" 00:14:34.009 }, 00:14:34.009 { 00:14:34.009 "method": "bdev_wait_for_examine" 00:14:34.009 } 00:14:34.009 ] 00:14:34.009 } 00:14:34.009 ] 00:14:34.009 } 00:14:34.009 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:34.009 fio-3.35 00:14:34.009 Starting 1 thread 00:14:40.600 00:14:40.600 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71226: Thu Nov 28 09:47:18 2024 00:14:40.600 write: IOPS=7253, BW=28.3MiB/s (29.7MB/s)(142MiB/5016msec); 0 zone resets 00:14:40.600 slat (nsec): min=2908, max=86660, avg=4125.56, stdev=3041.84 00:14:40.600 clat (usec): min=60, max=32387, avg=8801.84, stdev=8948.97 00:14:40.601 lat (usec): min=63, max=32391, avg=8805.97, stdev=8949.02 00:14:40.601 clat percentiles (usec): 00:14:40.601 | 1.00th=[ 113], 5.00th=[ 172], 10.00th=[ 297], 20.00th=[ 420], 00:14:40.601 | 30.00th=[ 603], 40.00th=[ 750], 50.00th=[ 1336], 60.00th=[15270], 00:14:40.601 | 70.00th=[16909], 80.00th=[18220], 90.00th=[20055], 95.00th=[21365], 00:14:40.601 | 99.00th=[25035], 99.50th=[27395], 99.90th=[30802], 99.95th=[31589], 00:14:40.601 | 99.99th=[32113] 00:14:40.601 bw ( KiB/s): min=26900, max=31800, per=100.00%, avg=29040.90, stdev=1415.20, samples=10 00:14:40.601 iops : min= 6725, max= 7950, avg=7260.20, stdev=353.80, samples=10 00:14:40.601 lat (usec) : 100=0.44%, 250=7.86%, 500=17.15%, 750=14.65%, 1000=6.84% 00:14:40.601 lat (msec) : 2=5.66%, 4=0.39%, 10=0.16%, 20=36.79%, 50=10.07% 00:14:40.601 cpu : usr=31.70%, sys=67.50%, ctx=9, majf=0, minf=763 00:14:40.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=69.2%, >=64=30.4% 00:14:40.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.601 complete : 0=0.0%, 4=94.9%, 8=3.5%, 16=1.4%, 32=0.1%, 64=0.1%, >=64=0.0% 00:14:40.601 issued rwts: total=0,36384,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:40.601 00:14:40.601 Run status group 0 (all jobs): 00:14:40.601 WRITE: bw=28.3MiB/s (29.7MB/s), 28.3MiB/s-28.3MiB/s (29.7MB/s-29.7MB/s), io=142MiB (149MB), run=5016-5016msec 00:14:40.862 ----------------------------------------------------- 00:14:40.862 Suppressions used: 00:14:40.862 count bytes template 00:14:40.862 1 11 /usr/src/fio/parse.c 00:14:40.862 1 8 libtcmalloc_minimal.so 00:14:40.862 1 904 libcrypto.so 00:14:40.862 ----------------------------------------------------- 00:14:40.862 00:14:40.862 00:14:40.862 real 0m13.772s 00:14:40.862 user 0m6.284s 00:14:40.862 sys 0m7.066s 00:14:40.862 09:47:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.862 ************************************ 00:14:40.862 END TEST xnvme_fio_plugin 00:14:40.862 ************************************ 00:14:40.862 09:47:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:40.862 09:47:19 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:40.862 09:47:19 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:40.862 09:47:19 
nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:40.862 09:47:19 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:40.862 09:47:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:40.862 09:47:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:40.862 09:47:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:40.862 ************************************ 00:14:40.862 START TEST xnvme_rpc 00:14:40.862 ************************************ 00:14:40.862 09:47:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:40.862 09:47:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:40.862 09:47:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:40.862 09:47:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:40.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.862 09:47:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:40.862 09:47:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71310 00:14:40.862 09:47:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71310 00:14:40.862 09:47:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71310 ']' 00:14:40.862 09:47:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.862 09:47:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.862 09:47:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.862 09:47:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.862 09:47:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.862 09:47:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.862 [2024-11-28 09:47:19.661575] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:14:40.862 [2024-11-28 09:47:19.661909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71310 ] 00:14:41.124 [2024-11-28 09:47:19.825686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.124 [2024-11-28 09:47:19.946314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.068 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.069 xnvme_bdev 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:42.069 
09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71310 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71310 ']' 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71310 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71310 00:14:42.069 killing process with pid 71310 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71310' 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71310 00:14:42.069 09:47:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71310 00:14:43.985 ************************************ 00:14:43.985 END TEST xnvme_rpc 00:14:43.985 ************************************ 00:14:43.985 00:14:43.985 real 0m2.903s 00:14:43.985 user 0m2.919s 00:14:43.985 sys 0m0.457s 00:14:43.985 09:47:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.985 09:47:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.985 09:47:22 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:43.985 09:47:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:43.985 09:47:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.985 09:47:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:43.985 ************************************ 00:14:43.985 START TEST xnvme_bdevperf 00:14:43.985 ************************************ 00:14:43.985 09:47:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:43.985 09:47:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:43.985 09:47:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:43.985 09:47:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:43.985 09:47:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:43.985 09:47:22 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:43.985 09:47:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:43.985 09:47:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:43.985 { 00:14:43.985 "subsystems": [ 00:14:43.985 { 00:14:43.985 "subsystem": "bdev", 00:14:43.985 "config": [ 00:14:43.985 { 00:14:43.985 "params": { 00:14:43.985 "io_mechanism": "io_uring_cmd", 00:14:43.985 "conserve_cpu": true, 00:14:43.985 "filename": "/dev/ng0n1", 00:14:43.985 "name": "xnvme_bdev" 00:14:43.985 }, 00:14:43.985 "method": "bdev_xnvme_create" 00:14:43.985 }, 00:14:43.985 { 00:14:43.985 "method": "bdev_wait_for_examine" 00:14:43.985 } 00:14:43.985 ] 00:14:43.985 } 00:14:43.985 ] 00:14:43.985 } 00:14:43.985 [2024-11-28 09:47:22.614720] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:43.985 [2024-11-28 09:47:22.614863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71380 ] 00:14:43.985 [2024-11-28 09:47:22.779132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.247 [2024-11-28 09:47:22.905727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.507 Running I/O for 5 seconds... 00:14:46.392 34688.00 IOPS, 135.50 MiB/s [2024-11-28T09:47:26.215Z] 37005.00 IOPS, 144.55 MiB/s [2024-11-28T09:47:27.601Z] 36410.67 IOPS, 142.23 MiB/s [2024-11-28T09:47:28.545Z] 36196.50 IOPS, 141.39 MiB/s 00:14:49.665 Latency(us) 00:14:49.665 [2024-11-28T09:47:28.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.665 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:49.665 xnvme_bdev : 5.00 37843.86 147.83 0.00 0.00 1687.11 674.26 10334.52 00:14:49.665 [2024-11-28T09:47:28.545Z] =================================================================================================================== 00:14:49.665 [2024-11-28T09:47:28.545Z] Total : 37843.86 147.83 0.00 0.00 1687.11 674.26 10334.52 00:14:50.238 09:47:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:50.238 09:47:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:50.238 09:47:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:50.238 09:47:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:50.238 09:47:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:50.238 { 00:14:50.238 "subsystems": [ 00:14:50.238 { 00:14:50.238 "subsystem": "bdev", 00:14:50.238 "config": [ 00:14:50.238 { 00:14:50.238 "params": { 00:14:50.238 "io_mechanism": "io_uring_cmd", 00:14:50.238 "conserve_cpu": true, 00:14:50.238 "filename": "/dev/ng0n1", 00:14:50.238 "name": "xnvme_bdev" 00:14:50.238 }, 00:14:50.238 "method": "bdev_xnvme_create" 00:14:50.238 }, 00:14:50.238 { 00:14:50.238 "method": "bdev_wait_for_examine" 00:14:50.238 } 00:14:50.238 ] 00:14:50.238 } 00:14:50.238 ] 00:14:50.238 } 00:14:50.238 [2024-11-28 09:47:29.064207] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:14:50.238 [2024-11-28 09:47:29.064385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71454 ] 00:14:50.499 [2024-11-28 09:47:29.232185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.499 [2024-11-28 09:47:29.344286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.760 Running I/O for 5 seconds... 00:14:53.088 42446.00 IOPS, 165.80 MiB/s [2024-11-28T09:47:32.912Z] 43540.50 IOPS, 170.08 MiB/s [2024-11-28T09:47:33.857Z] 44322.67 IOPS, 173.14 MiB/s [2024-11-28T09:47:34.800Z] 43610.75 IOPS, 170.35 MiB/s 00:14:55.920 Latency(us) 00:14:55.920 [2024-11-28T09:47:34.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.920 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:55.920 xnvme_bdev : 5.00 42722.30 166.88 0.00 0.00 1493.76 475.77 5696.59 00:14:55.920 [2024-11-28T09:47:34.800Z] =================================================================================================================== 00:14:55.920 [2024-11-28T09:47:34.800Z] Total : 42722.30 166.88 0.00 0.00 1493.76 475.77 5696.59 00:14:56.862 09:47:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:56.862 09:47:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:56.862 09:47:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:56.862 09:47:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:56.862 09:47:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:56.862 { 00:14:56.862 "subsystems": [ 00:14:56.862 { 00:14:56.862 "subsystem": "bdev", 00:14:56.862 "config": [ 00:14:56.862 { 00:14:56.862 "params": { 00:14:56.862 "io_mechanism": "io_uring_cmd", 00:14:56.862 "conserve_cpu": true, 00:14:56.862 "filename": "/dev/ng0n1", 00:14:56.862 "name": "xnvme_bdev" 00:14:56.862 }, 00:14:56.862 "method": "bdev_xnvme_create" 00:14:56.862 }, 00:14:56.862 { 00:14:56.862 "method": "bdev_wait_for_examine" 00:14:56.862 } 00:14:56.862 ] 00:14:56.862 } 00:14:56.862 ] 00:14:56.862 } 00:14:56.862 [2024-11-28 09:47:35.513927] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:56.862 [2024-11-28 09:47:35.514071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71534 ] 00:14:56.862 [2024-11-28 09:47:35.678766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.124 [2024-11-28 09:47:35.799422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.386 Running I/O for 5 seconds... 
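The xnvme_rpc test further up drives the same bdev through JSON-RPC instead of an inline config. rpc_cmd in that trace is the test harness's wrapper around scripts/rpc.py, so a rough by-hand equivalent, assuming a running spdk_tgt and the same /dev/ng0n1 device (the fixed sleep below is only illustrative; the harness waits on the RPC socket instead), could look like:

# Start the target, create an io_uring_cmd xnvme bdev with conserve_cpu enabled (-c),
# read the setting back from the saved config, then delete the bdev again.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
sleep 1
cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # prints: true
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev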
00:14:59.277 80064.00 IOPS, 312.75 MiB/s [2024-11-28T09:47:39.103Z] 80384.00 IOPS, 314.00 MiB/s [2024-11-28T09:47:40.484Z] 80384.00 IOPS, 314.00 MiB/s [2024-11-28T09:47:41.121Z] 81328.00 IOPS, 317.69 MiB/s [2024-11-28T09:47:41.121Z] 84505.60 IOPS, 330.10 MiB/s 00:15:02.241 Latency(us) 00:15:02.241 [2024-11-28T09:47:41.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.241 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:02.241 xnvme_bdev : 5.00 84471.89 329.97 0.00 0.00 754.24 397.00 4713.55 00:15:02.241 [2024-11-28T09:47:41.121Z] =================================================================================================================== 00:15:02.241 [2024-11-28T09:47:41.121Z] Total : 84471.89 329.97 0.00 0.00 754.24 397.00 4713.55 00:15:02.809 09:47:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:02.809 09:47:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:02.809 09:47:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:02.809 09:47:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:02.809 09:47:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:02.809 { 00:15:02.809 "subsystems": [ 00:15:02.809 { 00:15:02.809 "subsystem": "bdev", 00:15:02.809 "config": [ 00:15:02.809 { 00:15:02.809 "params": { 00:15:02.809 "io_mechanism": "io_uring_cmd", 00:15:02.809 "conserve_cpu": true, 00:15:02.809 "filename": "/dev/ng0n1", 00:15:02.809 "name": "xnvme_bdev" 00:15:02.809 }, 00:15:02.809 "method": "bdev_xnvme_create" 00:15:02.809 }, 00:15:02.809 { 00:15:02.809 "method": "bdev_wait_for_examine" 00:15:02.809 } 00:15:02.809 ] 00:15:02.809 } 00:15:02.809 ] 00:15:02.809 } 00:15:03.067 [2024-11-28 09:47:41.720116] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:15:03.067 [2024-11-28 09:47:41.720254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71603 ] 00:15:03.067 [2024-11-28 09:47:41.878688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.325 [2024-11-28 09:47:41.959062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.325 Running I/O for 5 seconds... 
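The xnvme_fio_plugin test earlier in this log assembles a long fio command line, but it reduces to preloading SPDK's fio bdev engine and pointing --spdk_json_conf at the same bdev config used for bdevperf. A minimal by-hand version, assuming the JSON has been saved to a file such as /tmp/xnvme.json (hypothetical path) and noting that this CI build also preloads libasan.so.8 only because SPDK was built with ASAN, could look like:

# The spdk_bdev ioengine comes from the preloaded plugin; --filename selects the
# bdev name defined in the JSON config rather than a block device path.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme.json \
    --filename=xnvme_bdev --name=xnvme_bdev --thread=1 --direct=1 \
    --rw=randread --bs=4k --iodepth=64 --numjobs=1 --time_based --runtime=5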
00:15:05.639 52859.00 IOPS, 206.48 MiB/s [2024-11-28T09:47:45.461Z] 50350.00 IOPS, 196.68 MiB/s [2024-11-28T09:47:46.405Z] 41370.33 IOPS, 161.60 MiB/s [2024-11-28T09:47:47.346Z] 35627.25 IOPS, 139.17 MiB/s [2024-11-28T09:47:47.346Z] 32134.00 IOPS, 125.52 MiB/s 00:15:08.466 Latency(us) 00:15:08.466 [2024-11-28T09:47:47.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.466 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:08.466 xnvme_bdev : 5.01 32107.22 125.42 0.00 0.00 1988.43 64.20 29037.49 00:15:08.466 [2024-11-28T09:47:47.347Z] =================================================================================================================== 00:15:08.467 [2024-11-28T09:47:47.347Z] Total : 32107.22 125.42 0.00 0.00 1988.43 64.20 29037.49 00:15:09.412 00:15:09.412 real 0m25.416s 00:15:09.412 user 0m17.519s 00:15:09.412 sys 0m5.886s 00:15:09.412 ************************************ 00:15:09.412 END TEST xnvme_bdevperf 00:15:09.412 ************************************ 00:15:09.412 09:47:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.412 09:47:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:09.412 09:47:48 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:09.412 09:47:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:09.412 09:47:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.412 09:47:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.412 ************************************ 00:15:09.412 START TEST xnvme_fio_plugin 00:15:09.412 ************************************ 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 
00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:09.412 09:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:09.412 { 00:15:09.412 "subsystems": [ 00:15:09.412 { 00:15:09.412 "subsystem": "bdev", 00:15:09.412 "config": [ 00:15:09.412 { 00:15:09.412 "params": { 00:15:09.412 "io_mechanism": "io_uring_cmd", 00:15:09.412 "conserve_cpu": true, 00:15:09.412 "filename": "/dev/ng0n1", 00:15:09.412 "name": "xnvme_bdev" 00:15:09.412 }, 00:15:09.412 "method": "bdev_xnvme_create" 00:15:09.412 }, 00:15:09.412 { 00:15:09.412 "method": "bdev_wait_for_examine" 00:15:09.412 } 00:15:09.412 ] 00:15:09.412 } 00:15:09.412 ] 00:15:09.412 } 00:15:09.412 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:09.412 fio-3.35 00:15:09.412 Starting 1 thread 00:15:16.007 00:15:16.007 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71721: Thu Nov 28 09:47:53 2024 00:15:16.007 read: IOPS=37.2k, BW=145MiB/s (152MB/s)(727MiB/5003msec) 00:15:16.007 slat (nsec): min=2878, max=89552, avg=3518.99, stdev=1871.07 00:15:16.007 clat (usec): min=865, max=5081, avg=1576.44, stdev=305.15 00:15:16.007 lat (usec): min=868, max=5093, avg=1579.95, stdev=305.53 00:15:16.007 clat percentiles (usec): 00:15:16.007 | 1.00th=[ 1037], 5.00th=[ 1139], 10.00th=[ 1221], 20.00th=[ 1319], 00:15:16.007 | 30.00th=[ 1401], 40.00th=[ 1467], 50.00th=[ 1549], 60.00th=[ 1631], 00:15:16.007 | 70.00th=[ 1713], 80.00th=[ 1811], 90.00th=[ 1975], 95.00th=[ 2114], 00:15:16.007 | 99.00th=[ 2376], 99.50th=[ 2507], 99.90th=[ 2933], 99.95th=[ 3752], 00:15:16.007 | 99.99th=[ 5014] 00:15:16.007 bw ( KiB/s): min=132343, max=172544, per=100.00%, avg=149986.56, stdev=12967.14, samples=9 00:15:16.007 iops : min=33085, max=43136, avg=37496.56, stdev=3241.91, samples=9 00:15:16.007 lat (usec) : 1000=0.44% 00:15:16.007 lat (msec) : 2=90.81%, 4=8.71%, 10=0.04% 00:15:16.007 cpu : usr=60.56%, sys=36.21%, ctx=7, majf=0, minf=762 00:15:16.007 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:16.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.007 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:15:16.007 issued rwts: total=186112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.007 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:16.007 00:15:16.007 Run status group 0 (all jobs): 00:15:16.007 READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=727MiB (762MB), run=5003-5003msec 00:15:16.269 ----------------------------------------------------- 00:15:16.269 Suppressions used: 00:15:16.269 count bytes template 00:15:16.269 1 11 /usr/src/fio/parse.c 00:15:16.269 1 8 libtcmalloc_minimal.so 00:15:16.269 1 904 libcrypto.so 00:15:16.269 ----------------------------------------------------- 00:15:16.269 00:15:16.269 09:47:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:16.269 09:47:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:16.269 09:47:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:16.269 09:47:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:16.269 09:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:16.269 { 00:15:16.269 "subsystems": [ 00:15:16.269 { 00:15:16.269 "subsystem": "bdev", 00:15:16.269 "config": [ 00:15:16.269 { 00:15:16.269 "params": { 00:15:16.269 "io_mechanism": "io_uring_cmd", 00:15:16.269 "conserve_cpu": true, 00:15:16.269 "filename": "/dev/ng0n1", 00:15:16.269 "name": "xnvme_bdev" 00:15:16.269 }, 00:15:16.269 "method": "bdev_xnvme_create" 00:15:16.269 }, 00:15:16.269 { 00:15:16.269 "method": "bdev_wait_for_examine" 00:15:16.269 } 00:15:16.269 ] 00:15:16.269 } 00:15:16.269 ] 00:15:16.269 } 00:15:16.530 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:16.530 fio-3.35 00:15:16.530 Starting 1 thread 00:15:23.117 00:15:23.117 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71813: Thu Nov 28 09:48:00 2024 00:15:23.117 write: IOPS=41.9k, BW=164MiB/s (172MB/s)(819MiB/5001msec); 0 zone resets 00:15:23.117 slat (usec): min=2, max=238, avg= 4.19, stdev= 2.26 00:15:23.117 clat (usec): min=422, max=6067, avg=1366.88, stdev=275.94 00:15:23.117 lat (usec): min=425, max=6071, avg=1371.07, stdev=276.30 00:15:23.117 clat percentiles (usec): 00:15:23.117 | 1.00th=[ 938], 5.00th=[ 1029], 10.00th=[ 1074], 20.00th=[ 1156], 00:15:23.117 | 30.00th=[ 1221], 40.00th=[ 1270], 50.00th=[ 1319], 60.00th=[ 1385], 00:15:23.117 | 70.00th=[ 1450], 80.00th=[ 1549], 90.00th=[ 1713], 95.00th=[ 1860], 00:15:23.117 | 99.00th=[ 2245], 99.50th=[ 2442], 99.90th=[ 3359], 99.95th=[ 3818], 00:15:23.117 | 99.99th=[ 4686] 00:15:23.117 bw ( KiB/s): min=161008, max=174192, per=99.95%, avg=167629.33, stdev=4954.97, samples=9 00:15:23.117 iops : min=40252, max=43548, avg=41907.33, stdev=1238.74, samples=9 00:15:23.117 lat (usec) : 500=0.01%, 750=0.04%, 1000=3.30% 00:15:23.117 lat (msec) : 2=94.21%, 4=2.41%, 10=0.04% 00:15:23.117 cpu : usr=64.48%, sys=30.40%, ctx=16, majf=0, minf=763 00:15:23.117 IO depths : 1=1.4%, 2=2.9%, 4=6.0%, 8=12.3%, 16=25.0%, 32=50.7%, >=64=1.7% 00:15:23.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.117 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:23.117 issued rwts: total=0,209677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.117 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:23.117 00:15:23.117 Run status group 0 (all jobs): 00:15:23.117 WRITE: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=819MiB (859MB), run=5001-5001msec 00:15:23.117 ----------------------------------------------------- 00:15:23.117 Suppressions used: 00:15:23.117 count bytes template 00:15:23.117 1 11 /usr/src/fio/parse.c 00:15:23.117 1 8 libtcmalloc_minimal.so 00:15:23.117 1 904 libcrypto.so 00:15:23.117 ----------------------------------------------------- 00:15:23.117 00:15:23.117 00:15:23.117 real 0m13.925s 00:15:23.117 user 0m9.135s 00:15:23.117 sys 0m4.033s 00:15:23.117 ************************************ 00:15:23.117 END TEST xnvme_fio_plugin 00:15:23.117 ************************************ 00:15:23.117 09:48:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.117 09:48:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:23.379 Process with pid 71310 is not found 00:15:23.379 09:48:02 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71310 00:15:23.379 09:48:02 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71310 ']' 00:15:23.379 09:48:02 nvme_xnvme -- 
common/autotest_common.sh@958 -- # kill -0 71310 00:15:23.379 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71310) - No such process 00:15:23.379 09:48:02 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71310 is not found' 00:15:23.379 00:15:23.379 real 3m31.147s 00:15:23.379 user 2m4.339s 00:15:23.379 sys 1m12.266s 00:15:23.379 ************************************ 00:15:23.379 END TEST nvme_xnvme 00:15:23.379 ************************************ 00:15:23.379 09:48:02 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.379 09:48:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:23.379 09:48:02 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:23.379 09:48:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:23.379 09:48:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.379 09:48:02 -- common/autotest_common.sh@10 -- # set +x 00:15:23.379 ************************************ 00:15:23.379 START TEST blockdev_xnvme 00:15:23.379 ************************************ 00:15:23.379 09:48:02 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:23.379 * Looking for test storage... 00:15:23.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:23.379 09:48:02 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:23.379 09:48:02 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:23.379 09:48:02 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:23.379 09:48:02 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:23.379 09:48:02 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.379 09:48:02 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.379 09:48:02 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.379 09:48:02 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.379 09:48:02 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.379 09:48:02 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.379 09:48:02 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.379 09:48:02 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.379 09:48:02 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.379 09:48:02 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.379 09:48:02 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.380 09:48:02 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:23.380 09:48:02 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.380 09:48:02 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:23.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.380 --rc genhtml_branch_coverage=1 00:15:23.380 --rc genhtml_function_coverage=1 00:15:23.380 --rc genhtml_legend=1 00:15:23.380 --rc geninfo_all_blocks=1 00:15:23.380 --rc geninfo_unexecuted_blocks=1 00:15:23.380 00:15:23.380 ' 00:15:23.380 09:48:02 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:23.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.380 --rc genhtml_branch_coverage=1 00:15:23.380 --rc genhtml_function_coverage=1 00:15:23.380 --rc genhtml_legend=1 00:15:23.380 --rc geninfo_all_blocks=1 00:15:23.380 --rc geninfo_unexecuted_blocks=1 00:15:23.380 00:15:23.380 ' 00:15:23.380 09:48:02 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:23.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.380 --rc genhtml_branch_coverage=1 00:15:23.380 --rc genhtml_function_coverage=1 00:15:23.380 --rc genhtml_legend=1 00:15:23.380 --rc geninfo_all_blocks=1 00:15:23.380 --rc geninfo_unexecuted_blocks=1 00:15:23.380 00:15:23.380 ' 00:15:23.380 09:48:02 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:23.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.380 --rc genhtml_branch_coverage=1 00:15:23.380 --rc genhtml_function_coverage=1 00:15:23.380 --rc genhtml_legend=1 00:15:23.380 --rc geninfo_all_blocks=1 00:15:23.380 --rc geninfo_unexecuted_blocks=1 00:15:23.380 00:15:23.380 ' 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71947 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71947 00:15:23.380 09:48:02 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:23.380 09:48:02 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 71947 ']' 00:15:23.380 09:48:02 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.380 09:48:02 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.380 09:48:02 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.380 09:48:02 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.380 09:48:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:23.642 [2024-11-28 09:48:02.323280] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:23.642 [2024-11-28 09:48:02.323658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71947 ] 00:15:23.642 [2024-11-28 09:48:02.488268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.903 [2024-11-28 09:48:02.611215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.476 09:48:03 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.476 09:48:03 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:24.476 09:48:03 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:24.476 09:48:03 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:24.476 09:48:03 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:24.476 09:48:03 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:24.476 09:48:03 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:25.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:25.625 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:25.625 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:25.625 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:25.625 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:25.625 09:48:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.625 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:15:25.626 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:25.626 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.626 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.626 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.626 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:25.626 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.626 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.626 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:25.626 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:25.626 09:48:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.626 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:25.626 09:48:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.626 nvme0n1 00:15:25.626 nvme0n2 00:15:25.626 nvme0n3 00:15:25.626 nvme1n1 00:15:25.626 nvme2n1 00:15:25.626 nvme3n1 00:15:25.626 09:48:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.626 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:25.626 09:48:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.626 09:48:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.626 09:48:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.626 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:25.626 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:25.626 09:48:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.626 09:48:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.626 09:48:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.626 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:25.626 09:48:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.626 09:48:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.626 09:48:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.626 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:25.626 09:48:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.626 09:48:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.626 09:48:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.888 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:25.888 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:25.888 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:25.888 09:48:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.888 09:48:04 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:15:25.888 09:48:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.888 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:25.888 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:25.889 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "317ccd10-2359-4d07-94cc-ae57646d54bd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "317ccd10-2359-4d07-94cc-ae57646d54bd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "cd92664d-ec14-4861-9cb3-8a7534862339"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cd92664d-ec14-4861-9cb3-8a7534862339",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "e89038e9-c663-43b0-b946-0dff474c1469"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e89038e9-c663-43b0-b946-0dff474c1469",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2643570d-3fb4-4117-bf49-d8c9c4b8297e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2643570d-3fb4-4117-bf49-d8c9c4b8297e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "d41c30af-1a36-4880-a15e-cffc3a6d3691"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d41c30af-1a36-4880-a15e-cffc3a6d3691",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5eafa407-5fd9-40be-ace3-eded51f493cf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5eafa407-5fd9-40be-ace3-eded51f493cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:25.889 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:25.889 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:25.889 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:25.889 09:48:04 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 71947 00:15:25.889 09:48:04 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71947 ']' 00:15:25.889 09:48:04 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71947 00:15:25.889 09:48:04 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:25.889 09:48:04 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.889 09:48:04 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71947 00:15:25.889 killing process with pid 71947 00:15:25.889 09:48:04 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:25.889 09:48:04 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:25.889 09:48:04 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71947' 00:15:25.889 09:48:04 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71947 00:15:25.889 
09:48:04 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71947 00:15:27.807 09:48:06 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:27.807 09:48:06 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:27.807 09:48:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:27.807 09:48:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.807 09:48:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.807 ************************************ 00:15:27.807 START TEST bdev_hello_world 00:15:27.807 ************************************ 00:15:27.807 09:48:06 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:27.807 [2024-11-28 09:48:06.371284] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:15:27.807 [2024-11-28 09:48:06.371431] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72231 ] 00:15:27.807 [2024-11-28 09:48:06.535961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.807 [2024-11-28 09:48:06.654503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.380 [2024-11-28 09:48:07.061659] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:28.380 [2024-11-28 09:48:07.061724] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:28.380 [2024-11-28 09:48:07.061743] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:28.380 [2024-11-28 09:48:07.063865] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:28.380 [2024-11-28 09:48:07.066374] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:28.380 [2024-11-28 09:48:07.066420] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:28.380 [2024-11-28 09:48:07.067121] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
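Note: the bdev_hello_world test above is the stock hello_bdev example pointed at the first xNVMe bdev from the generated configuration, so the write/read of "Hello World!" seen here can be reproduced by hand with the same command the harness runs (paths exactly as in this run; bdev.json carries the bdevs created by the bdev_xnvme_create calls printed earlier):
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1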
00:15:28.380 00:15:28.380 [2024-11-28 09:48:07.067170] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:29.323 ************************************ 00:15:29.323 END TEST bdev_hello_world 00:15:29.323 ************************************ 00:15:29.323 00:15:29.323 real 0m1.558s 00:15:29.323 user 0m1.184s 00:15:29.324 sys 0m0.223s 00:15:29.324 09:48:07 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.324 09:48:07 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:29.324 09:48:07 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:29.324 09:48:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:29.324 09:48:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.324 09:48:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.324 ************************************ 00:15:29.324 START TEST bdev_bounds 00:15:29.324 ************************************ 00:15:29.324 09:48:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:29.324 09:48:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72262 00:15:29.324 09:48:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:29.324 Process bdevio pid: 72262 00:15:29.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.324 09:48:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72262' 00:15:29.324 09:48:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72262 00:15:29.324 09:48:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:29.324 09:48:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72262 ']' 00:15:29.324 09:48:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.324 09:48:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.324 09:48:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.324 09:48:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.324 09:48:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:29.324 [2024-11-28 09:48:08.003472] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
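Note: bdev_bounds runs the bdevio suites against the same six bdevs. The flow visible in this log is two-step, start bdevio in wait mode and then trigger the suites over its RPC socket; condensed to the two commands the harness uses in this run:
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # once it is listening on /var/tmp/spdk.sock:
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests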
00:15:29.324 [2024-11-28 09:48:08.003621] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72262 ] 00:15:29.324 [2024-11-28 09:48:08.166237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:29.586 [2024-11-28 09:48:08.290297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.586 [2024-11-28 09:48:08.290479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.586 [2024-11-28 09:48:08.290503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.160 09:48:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.160 09:48:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:30.160 09:48:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:30.160 I/O targets: 00:15:30.160 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:30.160 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:30.160 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:30.160 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:30.160 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:30.160 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:30.160 00:15:30.160 00:15:30.160 CUnit - A unit testing framework for C - Version 2.1-3 00:15:30.160 http://cunit.sourceforge.net/ 00:15:30.160 00:15:30.160 00:15:30.160 Suite: bdevio tests on: nvme3n1 00:15:30.160 Test: blockdev write read block ...passed 00:15:30.160 Test: blockdev write zeroes read block ...passed 00:15:30.160 Test: blockdev write zeroes read no split ...passed 00:15:30.160 Test: blockdev write zeroes read split ...passed 00:15:30.160 Test: blockdev write zeroes read split partial ...passed 00:15:30.160 Test: blockdev reset ...passed 00:15:30.160 Test: blockdev write read 8 blocks ...passed 00:15:30.160 Test: blockdev write read size > 128k ...passed 00:15:30.160 Test: blockdev write read invalid size ...passed 00:15:30.160 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.160 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.160 Test: blockdev write read max offset ...passed 00:15:30.160 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.160 Test: blockdev writev readv 8 blocks ...passed 00:15:30.160 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.160 Test: blockdev writev readv block ...passed 00:15:30.160 Test: blockdev writev readv size > 128k ...passed 00:15:30.160 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.160 Test: blockdev comparev and writev ...passed 00:15:30.160 Test: blockdev nvme passthru rw ...passed 00:15:30.160 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.160 Test: blockdev nvme admin passthru ...passed 00:15:30.160 Test: blockdev copy ...passed 00:15:30.160 Suite: bdevio tests on: nvme2n1 00:15:30.160 Test: blockdev write read block ...passed 00:15:30.423 Test: blockdev write zeroes read block ...passed 00:15:30.423 Test: blockdev write zeroes read no split ...passed 00:15:30.423 Test: blockdev write zeroes read split ...passed 00:15:30.423 Test: blockdev write zeroes read split partial ...passed 00:15:30.423 Test: blockdev reset ...passed 
00:15:30.423 Test: blockdev write read 8 blocks ...passed 00:15:30.423 Test: blockdev write read size > 128k ...passed 00:15:30.423 Test: blockdev write read invalid size ...passed 00:15:30.423 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.423 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.423 Test: blockdev write read max offset ...passed 00:15:30.423 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.423 Test: blockdev writev readv 8 blocks ...passed 00:15:30.423 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.423 Test: blockdev writev readv block ...passed 00:15:30.423 Test: blockdev writev readv size > 128k ...passed 00:15:30.423 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.423 Test: blockdev comparev and writev ...passed 00:15:30.423 Test: blockdev nvme passthru rw ...passed 00:15:30.423 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.423 Test: blockdev nvme admin passthru ...passed 00:15:30.423 Test: blockdev copy ...passed 00:15:30.423 Suite: bdevio tests on: nvme1n1 00:15:30.423 Test: blockdev write read block ...passed 00:15:30.423 Test: blockdev write zeroes read block ...passed 00:15:30.423 Test: blockdev write zeroes read no split ...passed 00:15:30.423 Test: blockdev write zeroes read split ...passed 00:15:30.423 Test: blockdev write zeroes read split partial ...passed 00:15:30.423 Test: blockdev reset ...passed 00:15:30.423 Test: blockdev write read 8 blocks ...passed 00:15:30.423 Test: blockdev write read size > 128k ...passed 00:15:30.423 Test: blockdev write read invalid size ...passed 00:15:30.423 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.423 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.423 Test: blockdev write read max offset ...passed 00:15:30.423 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.423 Test: blockdev writev readv 8 blocks ...passed 00:15:30.423 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.423 Test: blockdev writev readv block ...passed 00:15:30.423 Test: blockdev writev readv size > 128k ...passed 00:15:30.423 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.423 Test: blockdev comparev and writev ...passed 00:15:30.423 Test: blockdev nvme passthru rw ...passed 00:15:30.423 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.423 Test: blockdev nvme admin passthru ...passed 00:15:30.423 Test: blockdev copy ...passed 00:15:30.423 Suite: bdevio tests on: nvme0n3 00:15:30.423 Test: blockdev write read block ...passed 00:15:30.423 Test: blockdev write zeroes read block ...passed 00:15:30.423 Test: blockdev write zeroes read no split ...passed 00:15:30.423 Test: blockdev write zeroes read split ...passed 00:15:30.423 Test: blockdev write zeroes read split partial ...passed 00:15:30.423 Test: blockdev reset ...passed 00:15:30.423 Test: blockdev write read 8 blocks ...passed 00:15:30.423 Test: blockdev write read size > 128k ...passed 00:15:30.423 Test: blockdev write read invalid size ...passed 00:15:30.423 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.423 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.423 Test: blockdev write read max offset ...passed 00:15:30.423 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.423 Test: blockdev writev readv 8 blocks 
...passed 00:15:30.423 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.423 Test: blockdev writev readv block ...passed 00:15:30.423 Test: blockdev writev readv size > 128k ...passed 00:15:30.423 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.423 Test: blockdev comparev and writev ...passed 00:15:30.423 Test: blockdev nvme passthru rw ...passed 00:15:30.423 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.423 Test: blockdev nvme admin passthru ...passed 00:15:30.423 Test: blockdev copy ...passed 00:15:30.423 Suite: bdevio tests on: nvme0n2 00:15:30.423 Test: blockdev write read block ...passed 00:15:30.423 Test: blockdev write zeroes read block ...passed 00:15:30.423 Test: blockdev write zeroes read no split ...passed 00:15:30.686 Test: blockdev write zeroes read split ...passed 00:15:30.686 Test: blockdev write zeroes read split partial ...passed 00:15:30.686 Test: blockdev reset ...passed 00:15:30.686 Test: blockdev write read 8 blocks ...passed 00:15:30.686 Test: blockdev write read size > 128k ...passed 00:15:30.686 Test: blockdev write read invalid size ...passed 00:15:30.686 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.686 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.686 Test: blockdev write read max offset ...passed 00:15:30.686 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.686 Test: blockdev writev readv 8 blocks ...passed 00:15:30.686 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.686 Test: blockdev writev readv block ...passed 00:15:30.686 Test: blockdev writev readv size > 128k ...passed 00:15:30.686 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.686 Test: blockdev comparev and writev ...passed 00:15:30.686 Test: blockdev nvme passthru rw ...passed 00:15:30.686 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.686 Test: blockdev nvme admin passthru ...passed 00:15:30.686 Test: blockdev copy ...passed 00:15:30.686 Suite: bdevio tests on: nvme0n1 00:15:30.686 Test: blockdev write read block ...passed 00:15:30.686 Test: blockdev write zeroes read block ...passed 00:15:30.686 Test: blockdev write zeroes read no split ...passed 00:15:30.948 Test: blockdev write zeroes read split ...passed 00:15:30.948 Test: blockdev write zeroes read split partial ...passed 00:15:30.948 Test: blockdev reset ...passed 00:15:30.948 Test: blockdev write read 8 blocks ...passed 00:15:30.948 Test: blockdev write read size > 128k ...passed 00:15:30.948 Test: blockdev write read invalid size ...passed 00:15:30.948 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.948 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.948 Test: blockdev write read max offset ...passed 00:15:30.948 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.948 Test: blockdev writev readv 8 blocks ...passed 00:15:30.948 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.948 Test: blockdev writev readv block ...passed 00:15:30.948 Test: blockdev writev readv size > 128k ...passed 00:15:30.948 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.948 Test: blockdev comparev and writev ...passed 00:15:30.948 Test: blockdev nvme passthru rw ...passed 00:15:30.948 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.948 Test: blockdev nvme admin passthru ...passed 00:15:30.948 Test: blockdev copy ...passed 
00:15:30.948 00:15:30.948 Run Summary: Type Total Ran Passed Failed Inactive 00:15:30.948 suites 6 6 n/a 0 0 00:15:30.948 tests 138 138 138 0 0 00:15:30.948 asserts 780 780 780 0 n/a 00:15:30.948 00:15:30.948 Elapsed time = 1.637 seconds 00:15:30.948 0 00:15:30.948 09:48:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72262 00:15:30.948 09:48:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72262 ']' 00:15:30.948 09:48:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72262 00:15:30.948 09:48:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:30.948 09:48:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.948 09:48:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72262 00:15:30.948 09:48:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.948 09:48:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.948 09:48:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72262' 00:15:30.948 killing process with pid 72262 00:15:30.948 09:48:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72262 00:15:30.948 09:48:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72262 00:15:31.893 09:48:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:31.893 00:15:31.893 real 0m2.524s 00:15:31.893 user 0m6.007s 00:15:31.893 sys 0m0.398s 00:15:31.893 09:48:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.893 ************************************ 00:15:31.893 END TEST bdev_bounds 00:15:31.893 09:48:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:31.893 ************************************ 00:15:31.893 09:48:10 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:31.893 09:48:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:31.893 09:48:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.893 09:48:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:31.893 ************************************ 00:15:31.893 START TEST bdev_nbd 00:15:31.893 ************************************ 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72324 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72324 /var/tmp/spdk-nbd.sock 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72324 ']' 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:31.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:31.893 09:48:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:31.893 [2024-11-28 09:48:10.602764] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
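Note: nbd_function_test exports each bdev as a kernel /dev/nbdX node through the dedicated /var/tmp/spdk-nbd.sock RPC socket and sanity-checks it with a single 4 KiB O_DIRECT read, which is what the nbd_start_disk / dd / nbd_stop_disk calls below show. Condensed to one device (commands as they appear in this run; the harness loops over all six bdevs):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1   # prints the assigned node, /dev/nbd0 here
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0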
00:15:31.893 [2024-11-28 09:48:10.602909] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.893 [2024-11-28 09:48:10.768648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.155 [2024-11-28 09:48:10.890001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.726 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.726 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:32.726 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:32.727 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.727 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:32.727 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:32.727 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:32.727 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.727 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:32.727 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:32.727 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:32.727 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:32.727 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:32.727 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:32.727 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.989 
1+0 records in 00:15:32.989 1+0 records out 00:15:32.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103637 s, 4.0 MB/s 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:32.989 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.251 1+0 records in 00:15:33.251 1+0 records out 00:15:33.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000810492 s, 5.1 MB/s 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:33.251 09:48:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:33.512 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:33.512 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:33.512 09:48:12 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:33.512 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:33.512 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:33.513 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.513 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.513 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:33.513 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:33.513 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.513 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.513 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.513 1+0 records in 00:15:33.513 1+0 records out 00:15:33.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107214 s, 3.8 MB/s 00:15:33.513 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.513 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:33.513 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.513 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.513 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:33.513 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.513 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:33.513 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.773 1+0 records in 00:15:33.773 1+0 records out 00:15:33.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000887537 s, 4.6 MB/s 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.773 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.774 1+0 records in 00:15:33.774 1+0 records out 00:15:33.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012772 s, 3.2 MB/s 00:15:33.774 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.035 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:34.036 09:48:12 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.036 1+0 records in 00:15:34.036 1+0 records out 00:15:34.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000954443 s, 4.3 MB/s 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:34.036 09:48:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:34.297 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:34.297 { 00:15:34.297 "nbd_device": "/dev/nbd0", 00:15:34.297 "bdev_name": "nvme0n1" 00:15:34.297 }, 00:15:34.297 { 00:15:34.297 "nbd_device": "/dev/nbd1", 00:15:34.297 "bdev_name": "nvme0n2" 00:15:34.297 }, 00:15:34.297 { 00:15:34.297 "nbd_device": "/dev/nbd2", 00:15:34.297 "bdev_name": "nvme0n3" 00:15:34.297 }, 00:15:34.297 { 00:15:34.297 "nbd_device": "/dev/nbd3", 00:15:34.297 "bdev_name": "nvme1n1" 00:15:34.297 }, 00:15:34.297 { 00:15:34.297 "nbd_device": "/dev/nbd4", 00:15:34.297 "bdev_name": "nvme2n1" 00:15:34.297 }, 00:15:34.297 { 00:15:34.297 "nbd_device": "/dev/nbd5", 00:15:34.297 "bdev_name": "nvme3n1" 00:15:34.297 } 00:15:34.297 ]' 00:15:34.297 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:34.297 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:34.297 { 00:15:34.297 "nbd_device": "/dev/nbd0", 00:15:34.297 "bdev_name": "nvme0n1" 00:15:34.297 }, 00:15:34.297 { 00:15:34.297 "nbd_device": "/dev/nbd1", 00:15:34.297 "bdev_name": "nvme0n2" 00:15:34.297 }, 00:15:34.297 { 00:15:34.297 "nbd_device": "/dev/nbd2", 00:15:34.297 "bdev_name": "nvme0n3" 00:15:34.297 }, 00:15:34.297 { 00:15:34.297 "nbd_device": "/dev/nbd3", 00:15:34.297 "bdev_name": "nvme1n1" 00:15:34.297 }, 00:15:34.297 { 00:15:34.297 "nbd_device": "/dev/nbd4", 00:15:34.297 "bdev_name": "nvme2n1" 00:15:34.297 }, 00:15:34.297 { 00:15:34.297 "nbd_device": "/dev/nbd5", 00:15:34.297 "bdev_name": "nvme3n1" 00:15:34.297 } 00:15:34.297 ]' 00:15:34.297 09:48:13 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:34.297 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:34.297 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:34.297 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:34.297 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.297 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:34.297 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.297 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:34.557 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.557 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.557 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.557 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.557 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.557 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.557 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:34.557 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.557 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.557 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:34.818 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:34.818 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:34.818 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:34.818 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.818 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.818 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:34.818 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:34.818 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.818 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.818 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.079 09:48:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:35.341 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:35.341 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:35.341 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:35.341 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.341 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.341 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:35.341 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.341 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.341 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.341 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:35.613 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:35.613 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:35.613 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:35.613 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.613 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.613 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:35.613 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.613 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.613 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:35.613 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.613 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:35.879 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:36.138 /dev/nbd0 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.138 1+0 records in 00:15:36.138 1+0 records out 00:15:36.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575163 s, 7.1 MB/s 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.138 09:48:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:36.396 /dev/nbd1 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.396 1+0 records in 00:15:36.396 1+0 records out 00:15:36.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406452 s, 10.1 MB/s 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.396 09:48:15 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.396 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:36.657 /dev/nbd10 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.657 1+0 records in 00:15:36.657 1+0 records out 00:15:36.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00193885 s, 2.1 MB/s 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.657 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:36.657 /dev/nbd11 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.919 09:48:15 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.919 1+0 records in 00:15:36.919 1+0 records out 00:15:36.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00143273 s, 2.9 MB/s 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.919 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:36.920 /dev/nbd12 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.182 1+0 records in 00:15:37.182 1+0 records out 00:15:37.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112962 s, 3.6 MB/s 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:37.182 09:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:37.182 /dev/nbd13 00:15:37.182 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:37.182 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:37.182 09:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:37.182 09:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.182 09:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.182 09:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.182 09:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.444 1+0 records in 00:15:37.444 1+0 records out 00:15:37.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100021 s, 4.1 MB/s 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:37.444 { 00:15:37.444 "nbd_device": "/dev/nbd0", 00:15:37.444 "bdev_name": "nvme0n1" 00:15:37.444 }, 00:15:37.444 { 00:15:37.444 "nbd_device": "/dev/nbd1", 00:15:37.444 "bdev_name": "nvme0n2" 00:15:37.444 }, 00:15:37.444 { 00:15:37.444 "nbd_device": "/dev/nbd10", 00:15:37.444 "bdev_name": "nvme0n3" 00:15:37.444 }, 00:15:37.444 { 00:15:37.444 "nbd_device": "/dev/nbd11", 00:15:37.444 "bdev_name": "nvme1n1" 00:15:37.444 }, 00:15:37.444 { 00:15:37.444 "nbd_device": "/dev/nbd12", 00:15:37.444 "bdev_name": "nvme2n1" 00:15:37.444 }, 00:15:37.444 { 00:15:37.444 "nbd_device": "/dev/nbd13", 00:15:37.444 "bdev_name": "nvme3n1" 00:15:37.444 } 00:15:37.444 ]' 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:37.444 { 00:15:37.444 "nbd_device": "/dev/nbd0", 00:15:37.444 "bdev_name": "nvme0n1" 00:15:37.444 }, 00:15:37.444 { 00:15:37.444 "nbd_device": "/dev/nbd1", 00:15:37.444 "bdev_name": "nvme0n2" 00:15:37.444 }, 00:15:37.444 { 00:15:37.444 "nbd_device": "/dev/nbd10", 00:15:37.444 "bdev_name": "nvme0n3" 00:15:37.444 }, 00:15:37.444 { 00:15:37.444 "nbd_device": "/dev/nbd11", 00:15:37.444 "bdev_name": "nvme1n1" 00:15:37.444 }, 00:15:37.444 { 00:15:37.444 "nbd_device": "/dev/nbd12", 00:15:37.444 "bdev_name": "nvme2n1" 00:15:37.444 }, 00:15:37.444 { 00:15:37.444 "nbd_device": "/dev/nbd13", 00:15:37.444 "bdev_name": "nvme3n1" 00:15:37.444 } 00:15:37.444 ]' 00:15:37.444 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:37.444 /dev/nbd1 00:15:37.444 /dev/nbd10 00:15:37.444 /dev/nbd11 00:15:37.444 /dev/nbd12 00:15:37.444 /dev/nbd13' 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:37.707 /dev/nbd1 00:15:37.707 /dev/nbd10 00:15:37.707 /dev/nbd11 00:15:37.707 /dev/nbd12 00:15:37.707 /dev/nbd13' 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:37.707 256+0 records in 00:15:37.707 256+0 records out 00:15:37.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421711 s, 249 MB/s 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:37.707 256+0 records in 00:15:37.707 256+0 records out 00:15:37.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.201553 s, 5.2 MB/s 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:37.707 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:37.968 256+0 records in 00:15:37.968 256+0 records out 00:15:37.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239182 s, 
4.4 MB/s 00:15:37.968 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:37.968 09:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:38.230 256+0 records in 00:15:38.230 256+0 records out 00:15:38.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.232564 s, 4.5 MB/s 00:15:38.230 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.230 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:38.491 256+0 records in 00:15:38.491 256+0 records out 00:15:38.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240127 s, 4.4 MB/s 00:15:38.491 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.491 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:38.753 256+0 records in 00:15:38.753 256+0 records out 00:15:38.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.299297 s, 3.5 MB/s 00:15:38.753 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.753 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:39.015 256+0 records in 00:15:39.015 256+0 records out 00:15:39.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.200414 s, 5.2 MB/s 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:39.015 
09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.015 09:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:39.277 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:39.277 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:39.277 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:39.277 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.277 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.277 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:39.277 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.277 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.277 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.277 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:39.539 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:39.539 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:39.539 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:39.539 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.539 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.539 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:39.539 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.539 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.539 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.539 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:39.799 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:39.799 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:39.799 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:39.799 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.799 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.799 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:39.799 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.799 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.799 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.799 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:40.057 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:40.057 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:40.057 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:40.057 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.057 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.057 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:40.057 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.057 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.057 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.057 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:40.315 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:40.315 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:40.315 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:40.315 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.315 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.315 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:40.315 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.315 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.315 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.315 09:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:40.315 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:40.315 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:40.315 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:40.315 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.315 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.315 
09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:40.315 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.315 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.315 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:40.315 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:40.315 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:40.574 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:40.833 malloc_lvol_verify 00:15:40.833 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:41.092 e83c6239-ca02-4385-9628-7fb8c69bdea4 00:15:41.092 09:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:41.350 612fd7af-3102-4542-a890-99d42b81dac4 00:15:41.350 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:41.608 /dev/nbd0 00:15:41.608 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:41.609 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:41.609 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:41.609 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:41.609 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:41.609 Discarding device blocks: 0/4096mke2fs 1.47.0 (5-Feb-2023) 00:15:41.609 
done 00:15:41.609 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:41.609 00:15:41.609 Allocating group tables: 0/1 done 00:15:41.609 Writing inode tables: 0/1 done 00:15:41.609 Creating journal (1024 blocks): done 00:15:41.609 Writing superblocks and filesystem accounting information: 0/1 done 00:15:41.609 00:15:41.609 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:41.609 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:41.609 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:41.609 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:41.609 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:41.609 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.609 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:41.609 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72324 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72324 ']' 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72324 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72324 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.868 killing process with pid 72324 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72324' 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72324 00:15:41.868 09:48:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72324 00:15:42.436 09:48:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:42.436 00:15:42.436 real 0m10.592s 00:15:42.436 user 0m14.214s 00:15:42.436 sys 0m3.712s 00:15:42.436 09:48:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.436 ************************************ 00:15:42.436 END TEST bdev_nbd 00:15:42.436 ************************************ 00:15:42.436 09:48:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
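The bdev_nbd trace above exercises SPDK's nbd_common.sh helpers end to end: each bdev is attached to an /dev/nbdX node over the RPC socket, the test polls /proc/partitions until the kernel exposes the node, random data is written through the device and read back, and the node is detached again. As a condensed, standalone sketch of that same flow (assumptions not taken from the log: an SPDK target is already listening on /var/tmp/spdk-nbd.sock and exposes the bdev names listed in the nbd_get_disks output above; the 0.1 s polling interval is illustrative), the verification loop amounts to roughly:

    #!/usr/bin/env bash
    # Minimal sketch of the nbd attach / write / compare / detach cycle traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest

    # Attach the bdev to an nbd node via the SPDK RPC shown in the log.
    "$rpc" -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0

    # Wait (up to 20 tries) until the kernel lists the node in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1   # interval is an assumption for this sketch
    done

    # Write 1 MiB of reference data through the nbd device, then compare it back.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$tmp" /dev/nbd0

    # Detach and confirm no nbd devices remain registered.
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_get_disks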
00:15:42.436 09:48:21 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:42.436 09:48:21 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:42.436 09:48:21 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:42.436 09:48:21 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:42.436 09:48:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:42.437 09:48:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.437 09:48:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:42.437 ************************************ 00:15:42.437 START TEST bdev_fio 00:15:42.437 ************************************ 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:42.437 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:42.437 ************************************ 00:15:42.437 START TEST bdev_fio_rw_verify 00:15:42.437 ************************************ 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:42.437 09:48:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:42.696 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:42.696 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:42.696 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:42.696 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:42.696 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:42.696 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:42.696 fio-3.35 00:15:42.696 Starting 6 threads 00:15:54.925 00:15:54.925 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72732: Thu Nov 28 09:48:32 2024 00:15:54.925 read: IOPS=18.7k, BW=73.1MiB/s (76.7MB/s)(731MiB/10002msec) 00:15:54.925 slat (usec): min=2, max=2674, avg= 6.05, stdev=16.98 00:15:54.925 clat (usec): min=83, max=6036, avg=948.97, stdev=663.82 00:15:54.925 lat (usec): min=89, max=6057, avg=955.02, stdev=664.63 
00:15:54.925 clat percentiles (usec): 00:15:54.925 | 50.000th=[ 816], 99.000th=[ 3064], 99.900th=[ 4293], 99.990th=[ 5276], 00:15:54.925 | 99.999th=[ 5997] 00:15:54.925 write: IOPS=19.0k, BW=74.1MiB/s (77.7MB/s)(741MiB/10002msec); 0 zone resets 00:15:54.925 slat (usec): min=9, max=5446, avg=38.85, stdev=124.09 00:15:54.925 clat (usec): min=43, max=11932, avg=1295.95, stdev=923.69 00:15:54.925 lat (usec): min=59, max=11961, avg=1334.80, stdev=933.98 00:15:54.925 clat percentiles (usec): 00:15:54.925 | 50.000th=[ 1106], 99.000th=[ 4752], 99.900th=[ 7570], 99.990th=[ 9503], 00:15:54.925 | 99.999th=[11863] 00:15:54.925 bw ( KiB/s): min=46568, max=119980, per=100.00%, avg=76737.84, stdev=3233.66, samples=114 00:15:54.925 iops : min=11641, max=29993, avg=19183.42, stdev=808.37, samples=114 00:15:54.925 lat (usec) : 50=0.01%, 100=0.04%, 250=7.72%, 500=14.97%, 750=15.31% 00:15:54.925 lat (usec) : 1000=14.06% 00:15:54.925 lat (msec) : 2=36.20%, 4=10.70%, 10=0.99%, 20=0.01% 00:15:54.925 cpu : usr=40.70%, sys=33.54%, ctx=6230, majf=0, minf=17590 00:15:54.925 IO depths : 1=10.8%, 2=23.0%, 4=51.5%, 8=14.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.925 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.925 issued rwts: total=187185,189768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.925 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:54.925 00:15:54.925 Run status group 0 (all jobs): 00:15:54.925 READ: bw=73.1MiB/s (76.7MB/s), 73.1MiB/s-73.1MiB/s (76.7MB/s-76.7MB/s), io=731MiB (767MB), run=10002-10002msec 00:15:54.925 WRITE: bw=74.1MiB/s (77.7MB/s), 74.1MiB/s-74.1MiB/s (77.7MB/s-77.7MB/s), io=741MiB (777MB), run=10002-10002msec 00:15:54.925 ----------------------------------------------------- 00:15:54.925 Suppressions used: 00:15:54.925 count bytes template 00:15:54.925 6 48 /usr/src/fio/parse.c 00:15:54.925 2453 235488 /usr/src/fio/iolog.c 00:15:54.925 1 8 libtcmalloc_minimal.so 00:15:54.925 1 904 libcrypto.so 00:15:54.925 ----------------------------------------------------- 00:15:54.925 00:15:54.925 00:15:54.925 real 0m11.967s 00:15:54.925 user 0m25.934s 00:15:54.925 sys 0m20.442s 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:54.925 ************************************ 00:15:54.925 END TEST bdev_fio_rw_verify 00:15:54.925 ************************************ 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:54.925 
09:48:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:54.925 09:48:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:54.926 09:48:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "317ccd10-2359-4d07-94cc-ae57646d54bd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "317ccd10-2359-4d07-94cc-ae57646d54bd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "cd92664d-ec14-4861-9cb3-8a7534862339"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cd92664d-ec14-4861-9cb3-8a7534862339",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "e89038e9-c663-43b0-b946-0dff474c1469"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e89038e9-c663-43b0-b946-0dff474c1469",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2643570d-3fb4-4117-bf49-d8c9c4b8297e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2643570d-3fb4-4117-bf49-d8c9c4b8297e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "d41c30af-1a36-4880-a15e-cffc3a6d3691"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d41c30af-1a36-4880-a15e-cffc3a6d3691",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5eafa407-5fd9-40be-ace3-eded51f493cf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5eafa407-5fd9-40be-ace3-eded51f493cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:54.926 09:48:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:54.926 09:48:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:54.926 /home/vagrant/spdk_repo/spdk 00:15:54.926 09:48:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:54.926 09:48:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:54.926 09:48:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:54.926 00:15:54.926 real 
0m12.150s 00:15:54.926 user 0m26.017s 00:15:54.926 sys 0m20.521s 00:15:54.926 09:48:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.926 09:48:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:54.926 ************************************ 00:15:54.926 END TEST bdev_fio 00:15:54.926 ************************************ 00:15:54.926 09:48:33 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:54.926 09:48:33 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:54.926 09:48:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:54.926 09:48:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.926 09:48:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:54.926 ************************************ 00:15:54.926 START TEST bdev_verify 00:15:54.926 ************************************ 00:15:54.926 09:48:33 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:54.926 [2024-11-28 09:48:33.464944] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:15:54.926 [2024-11-28 09:48:33.465095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72901 ] 00:15:54.926 [2024-11-28 09:48:33.629071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:54.926 [2024-11-28 09:48:33.756092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.926 [2024-11-28 09:48:33.756203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.497 Running I/O for 5 seconds... 
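bdev_verify swaps fio for SPDK's bundled bdevperf example: the same bdev.json describes the xNVMe bdevs, -w verify makes every completed read get checked against the data previously written, and -C has each enabled core drive I/O to every bdev, which is why each device appears once per core mask in the table below. Reproduced outside the harness (command taken from the trace above; the trailing '' is just the empty extra-arguments slot the wrapper passes), the run is roughly:

  # 128 outstanding 4 KiB verify I/Os per bdev for 5 s, reactors on cores 0-1.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3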
00:15:57.826 23328.00 IOPS, 91.12 MiB/s [2024-11-28T09:48:37.651Z] 22992.00 IOPS, 89.81 MiB/s [2024-11-28T09:48:38.595Z] 23200.00 IOPS, 90.62 MiB/s [2024-11-28T09:48:39.541Z] 23280.00 IOPS, 90.94 MiB/s [2024-11-28T09:48:39.541Z] 22844.20 IOPS, 89.24 MiB/s 00:16:00.661 Latency(us) 00:16:00.661 [2024-11-28T09:48:39.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.661 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:00.661 Verification LBA range: start 0x0 length 0x80000 00:16:00.661 nvme0n1 : 5.04 1778.19 6.95 0.00 0.00 71853.78 15526.99 77836.60 00:16:00.661 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:00.661 Verification LBA range: start 0x80000 length 0x80000 00:16:00.661 nvme0n1 : 5.05 1925.24 7.52 0.00 0.00 66362.40 7410.61 64931.05 00:16:00.661 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:00.661 Verification LBA range: start 0x0 length 0x80000 00:16:00.661 nvme0n2 : 5.06 1771.13 6.92 0.00 0.00 71999.67 13107.20 79046.50 00:16:00.661 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:00.661 Verification LBA range: start 0x80000 length 0x80000 00:16:00.661 nvme0n2 : 5.05 1899.33 7.42 0.00 0.00 67136.37 8822.15 67754.14 00:16:00.661 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:00.661 Verification LBA range: start 0x0 length 0x80000 00:16:00.661 nvme0n3 : 5.06 1795.87 7.02 0.00 0.00 70891.23 9175.04 79853.10 00:16:00.661 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:00.661 Verification LBA range: start 0x80000 length 0x80000 00:16:00.661 nvme0n3 : 5.06 1924.05 7.52 0.00 0.00 66145.37 5797.42 70173.93 00:16:00.661 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:00.661 Verification LBA range: start 0x0 length 0x20000 00:16:00.661 nvme1n1 : 5.06 1795.17 7.01 0.00 0.00 70787.68 10737.82 75820.11 00:16:00.661 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:00.661 Verification LBA range: start 0x20000 length 0x20000 00:16:00.661 nvme1n1 : 5.04 1930.66 7.54 0.00 0.00 65791.20 8217.21 65334.35 00:16:00.661 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:00.661 Verification LBA range: start 0x0 length 0xbd0bd 00:16:00.661 nvme2n1 : 5.07 2213.86 8.65 0.00 0.00 57172.08 4562.31 137928.07 00:16:00.661 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:00.661 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:00.661 nvme2n1 : 5.08 2281.98 8.91 0.00 0.00 55370.05 913.72 137121.48 00:16:00.661 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:00.661 Verification LBA range: start 0x0 length 0xa0000 00:16:00.661 nvme3n1 : 5.07 1741.72 6.80 0.00 0.00 72728.98 4738.76 87112.47 00:16:00.661 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:00.661 Verification LBA range: start 0xa0000 length 0xa0000 00:16:00.661 nvme3n1 : 5.07 1515.40 5.92 0.00 0.00 83398.72 2369.38 112116.97 00:16:00.661 [2024-11-28T09:48:39.541Z] =================================================================================================================== 00:16:00.661 [2024-11-28T09:48:39.541Z] Total : 22572.61 88.17 0.00 0.00 67563.75 913.72 137928.07 00:16:01.607 00:16:01.607 real 0m6.764s 00:16:01.607 user 0m10.921s 00:16:01.607 sys 0m1.475s 00:16:01.607 09:48:40 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.607 ************************************ 00:16:01.607 END TEST bdev_verify 00:16:01.607 ************************************ 00:16:01.607 09:48:40 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:01.607 09:48:40 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:01.607 09:48:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:01.607 09:48:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.607 09:48:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:01.607 ************************************ 00:16:01.607 START TEST bdev_verify_big_io 00:16:01.607 ************************************ 00:16:01.607 09:48:40 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:01.607 [2024-11-28 09:48:40.316120] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:01.607 [2024-11-28 09:48:40.316306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73001 ] 00:16:01.607 [2024-11-28 09:48:40.484406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:01.869 [2024-11-28 09:48:40.604201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.869 [2024-11-28 09:48:40.604204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.442 Running I/O for 5 seconds... 
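bdev_verify_big_io repeats the same workload with only the I/O size changed, -o 65536 instead of -o 4096, which moves the stress from IOPS handling to per-I/O bandwidth and buffer management. A hedged sketch of sweeping a few sizes with the same command shape (the size list and log file names are illustrative, not part of the test):

  # Hypothetical sweep: rerun bdevperf at several block sizes and keep each log.
  for bs in 4096 65536 131072; do
      /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
          --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
          -q 128 -o "$bs" -w verify -t 5 -C -m 0x3 > "bdevperf_bs${bs}.log"
  done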
00:16:08.565 2864.00 IOPS, 179.00 MiB/s [2024-11-28T09:48:47.445Z] 3045.50 IOPS, 190.34 MiB/s 00:16:08.565 Latency(us) 00:16:08.565 [2024-11-28T09:48:47.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.565 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:08.565 Verification LBA range: start 0x0 length 0x8000 00:16:08.565 nvme0n1 : 5.93 107.96 6.75 0.00 0.00 1136171.85 6377.16 1025991.29 00:16:08.565 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:08.565 Verification LBA range: start 0x8000 length 0x8000 00:16:08.565 nvme0n1 : 5.81 154.33 9.65 0.00 0.00 786570.52 27021.00 993727.41 00:16:08.565 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:08.565 Verification LBA range: start 0x0 length 0x8000 00:16:08.565 nvme0n2 : 5.93 83.63 5.23 0.00 0.00 1431136.30 196809.65 2555299.05 00:16:08.565 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:08.565 Verification LBA range: start 0x8000 length 0x8000 00:16:08.565 nvme0n2 : 5.79 138.08 8.63 0.00 0.00 867238.90 58478.28 1096971.82 00:16:08.565 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:08.565 Verification LBA range: start 0x0 length 0x8000 00:16:08.565 nvme0n3 : 5.93 107.85 6.74 0.00 0.00 1081529.90 150027.03 1290555.08 00:16:08.565 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:08.565 Verification LBA range: start 0x8000 length 0x8000 00:16:08.565 nvme0n3 : 5.80 140.77 8.80 0.00 0.00 828184.26 137928.07 1471232.79 00:16:08.565 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:08.565 Verification LBA range: start 0x0 length 0x2000 00:16:08.565 nvme1n1 : 5.94 96.93 6.06 0.00 0.00 1188220.54 6301.54 3239293.24 00:16:08.565 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:08.565 Verification LBA range: start 0x2000 length 0x2000 00:16:08.565 nvme1n1 : 5.81 159.75 9.98 0.00 0.00 714774.76 6553.60 1206669.00 00:16:08.565 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:08.565 Verification LBA range: start 0x0 length 0xbd0b 00:16:08.565 nvme2n1 : 5.94 126.50 7.91 0.00 0.00 882391.43 17442.66 1471232.79 00:16:08.565 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:08.565 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:08.565 nvme2n1 : 5.89 172.99 10.81 0.00 0.00 639372.68 3428.04 1264743.98 00:16:08.565 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:08.565 Verification LBA range: start 0x0 length 0xa000 00:16:08.565 nvme3n1 : 5.95 137.15 8.57 0.00 0.00 789051.42 5923.45 1180857.90 00:16:08.565 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:08.565 Verification LBA range: start 0xa000 length 0xa000 00:16:08.565 nvme3n1 : 5.90 162.67 10.17 0.00 0.00 662746.81 639.61 935652.43 00:16:08.565 [2024-11-28T09:48:47.445Z] =================================================================================================================== 00:16:08.565 [2024-11-28T09:48:47.445Z] Total : 1588.61 99.29 0.00 0.00 872302.97 639.61 3239293.24 00:16:09.512 00:16:09.512 real 0m7.921s 00:16:09.512 user 0m14.418s 00:16:09.512 sys 0m0.503s 00:16:09.512 09:48:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.512 09:48:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 
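As a quick sanity check on the totals above, throughput is just IOPS times block size; for this 64 KiB run the arithmetic agrees with the reported 99.29 MiB/s:

  # 1588.61 IOPS x 65536 B / 1048576 ~= 99.29 MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 1588.61 * 65536 / 1048576 }'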
00:16:09.512 ************************************ 00:16:09.512 END TEST bdev_verify_big_io 00:16:09.512 ************************************ 00:16:09.512 09:48:48 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:09.512 09:48:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:09.512 09:48:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.512 09:48:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:09.512 ************************************ 00:16:09.512 START TEST bdev_write_zeroes 00:16:09.512 ************************************ 00:16:09.512 09:48:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:09.512 [2024-11-28 09:48:48.307221] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:09.512 [2024-11-28 09:48:48.307360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73111 ] 00:16:09.774 [2024-11-28 09:48:48.475776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.774 [2024-11-28 09:48:48.600365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.347 Running I/O for 1 seconds... 00:16:11.299 78336.00 IOPS, 306.00 MiB/s 00:16:11.299 Latency(us) 00:16:11.299 [2024-11-28T09:48:50.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.299 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:11.299 nvme0n1 : 1.02 12755.26 49.83 0.00 0.00 10024.98 5016.02 25811.10 00:16:11.299 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:11.299 nvme0n2 : 1.02 12740.85 49.77 0.00 0.00 10027.07 5217.67 26012.75 00:16:11.299 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:11.299 nvme0n3 : 1.03 12675.85 49.52 0.00 0.00 10067.74 5293.29 23290.49 00:16:11.299 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:11.299 nvme1n1 : 1.03 12661.79 49.46 0.00 0.00 10071.66 5419.32 22887.19 00:16:11.299 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:11.299 nvme2n1 : 1.03 13859.77 54.14 0.00 0.00 9168.45 5116.85 22887.19 00:16:11.299 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:11.299 nvme3n1 : 1.03 12635.94 49.36 0.00 0.00 10017.35 3957.37 26617.70 00:16:11.299 [2024-11-28T09:48:50.179Z] =================================================================================================================== 00:16:11.299 [2024-11-28T09:48:50.179Z] Total : 77329.46 302.07 0.00 0.00 9884.79 3957.37 26617.70 00:16:12.261 00:16:12.261 real 0m2.668s 00:16:12.261 user 0m1.947s 00:16:12.261 sys 0m0.518s 00:16:12.261 09:48:50 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.261 ************************************ 00:16:12.261 END TEST bdev_write_zeroes 00:16:12.261 ************************************ 00:16:12.261 09:48:50 
blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:12.261 09:48:50 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:12.261 09:48:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:12.261 09:48:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.261 09:48:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:12.261 ************************************ 00:16:12.261 START TEST bdev_json_nonenclosed 00:16:12.261 ************************************ 00:16:12.261 09:48:50 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:12.261 [2024-11-28 09:48:51.042784] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:12.261 [2024-11-28 09:48:51.042921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73159 ] 00:16:12.553 [2024-11-28 09:48:51.207666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.553 [2024-11-28 09:48:51.329697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.553 [2024-11-28 09:48:51.329798] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:12.553 [2024-11-28 09:48:51.329818] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:12.553 [2024-11-28 09:48:51.329829] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:12.825 00:16:12.825 real 0m0.556s 00:16:12.825 user 0m0.334s 00:16:12.825 sys 0m0.115s 00:16:12.825 ************************************ 00:16:12.825 09:48:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.825 09:48:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:12.825 END TEST bdev_json_nonenclosed 00:16:12.825 ************************************ 00:16:12.825 09:48:51 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:12.825 09:48:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:12.825 09:48:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.825 09:48:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:12.825 ************************************ 00:16:12.825 START TEST bdev_json_nonarray 00:16:12.825 ************************************ 00:16:12.825 09:48:51 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:12.825 [2024-11-28 09:48:51.678454] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
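bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is pointed at deliberately malformed configs and is expected to reject them in json_config_prepare_ctx and exit through spdk_app_stop instead of crashing. The fixture contents are not echoed in the log; a hypothetical minimal reproduction of the "not enclosed in {}" failure would be a config whose top-level value is valid JSON but not an object, for example:

  # Hypothetical fixture: syntactically valid JSON, but the top-level value is
  # an array rather than an object, so loading fails with "not enclosed in {}".
  printf '[ { "subsystems": [] } ]\n' > /tmp/nonenclosed.json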
00:16:12.825 [2024-11-28 09:48:51.678613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73185 ] 00:16:13.085 [2024-11-28 09:48:51.844864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.346 [2024-11-28 09:48:51.966273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.346 [2024-11-28 09:48:51.966388] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:13.346 [2024-11-28 09:48:51.966409] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:13.346 [2024-11-28 09:48:51.966419] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:13.346 00:16:13.346 real 0m0.564s 00:16:13.346 user 0m0.343s 00:16:13.346 sys 0m0.114s 00:16:13.346 09:48:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.346 ************************************ 00:16:13.346 END TEST bdev_json_nonarray 00:16:13.346 ************************************ 00:16:13.346 09:48:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:13.346 09:48:52 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:13.346 09:48:52 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:13.346 09:48:52 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:13.346 09:48:52 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:13.346 09:48:52 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:13.346 09:48:52 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:13.607 09:48:52 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:13.607 09:48:52 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:13.607 09:48:52 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:13.607 09:48:52 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:13.607 09:48:52 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:13.607 09:48:52 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:13.866 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:18.073 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:18.335 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:18.335 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:18.335 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:18.597 00:16:18.597 real 0m55.152s 00:16:18.597 user 1m20.563s 00:16:18.597 sys 0m37.235s 00:16:18.597 09:48:57 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.597 ************************************ 00:16:18.597 END TEST blockdev_xnvme 00:16:18.597 09:48:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:18.597 ************************************ 00:16:18.597 09:48:57 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:18.597 09:48:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:18.597 09:48:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.597 09:48:57 -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.597 ************************************ 00:16:18.597 START TEST ublk 00:16:18.597 ************************************ 00:16:18.597 09:48:57 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:18.597 * Looking for test storage... 00:16:18.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:18.598 09:48:57 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:18.598 09:48:57 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:18.598 09:48:57 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:18.598 09:48:57 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:18.598 09:48:57 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:18.598 09:48:57 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.598 09:48:57 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.598 09:48:57 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.598 09:48:57 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.598 09:48:57 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.598 09:48:57 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.598 09:48:57 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:18.598 09:48:57 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:18.598 09:48:57 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:18.598 09:48:57 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.598 09:48:57 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:18.598 09:48:57 ublk -- scripts/common.sh@345 -- # : 1 00:16:18.598 09:48:57 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.598 09:48:57 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:18.598 09:48:57 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:18.598 09:48:57 ublk -- scripts/common.sh@353 -- # local d=1 00:16:18.598 09:48:57 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.598 09:48:57 ublk -- scripts/common.sh@355 -- # echo 1 00:16:18.598 09:48:57 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.598 09:48:57 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:18.598 09:48:57 ublk -- scripts/common.sh@353 -- # local d=2 00:16:18.598 09:48:57 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.598 09:48:57 ublk -- scripts/common.sh@355 -- # echo 2 00:16:18.598 09:48:57 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:18.598 09:48:57 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.598 09:48:57 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:18.598 09:48:57 ublk -- scripts/common.sh@368 -- # return 0 00:16:18.598 09:48:57 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.598 09:48:57 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:18.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.598 --rc genhtml_branch_coverage=1 00:16:18.598 --rc genhtml_function_coverage=1 00:16:18.598 --rc genhtml_legend=1 00:16:18.598 --rc geninfo_all_blocks=1 00:16:18.598 --rc geninfo_unexecuted_blocks=1 00:16:18.598 00:16:18.598 ' 00:16:18.598 09:48:57 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:18.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.598 --rc genhtml_branch_coverage=1 00:16:18.598 --rc genhtml_function_coverage=1 00:16:18.598 --rc genhtml_legend=1 00:16:18.598 --rc geninfo_all_blocks=1 00:16:18.598 --rc geninfo_unexecuted_blocks=1 00:16:18.598 00:16:18.598 ' 00:16:18.598 09:48:57 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:18.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.598 --rc genhtml_branch_coverage=1 00:16:18.598 --rc genhtml_function_coverage=1 00:16:18.598 --rc genhtml_legend=1 00:16:18.598 --rc geninfo_all_blocks=1 00:16:18.598 --rc geninfo_unexecuted_blocks=1 00:16:18.598 00:16:18.598 ' 00:16:18.598 09:48:57 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:18.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.598 --rc genhtml_branch_coverage=1 00:16:18.598 --rc genhtml_function_coverage=1 00:16:18.598 --rc genhtml_legend=1 00:16:18.598 --rc geninfo_all_blocks=1 00:16:18.598 --rc geninfo_unexecuted_blocks=1 00:16:18.598 00:16:18.598 ' 00:16:18.598 09:48:57 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:18.598 09:48:57 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:18.598 09:48:57 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:18.598 09:48:57 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:18.598 09:48:57 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:18.598 09:48:57 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:18.598 09:48:57 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:18.598 09:48:57 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:18.598 09:48:57 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:18.598 09:48:57 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:18.598 09:48:57 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:18.598 09:48:57 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:18.598 09:48:57 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:18.598 09:48:57 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:18.598 09:48:57 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:18.598 09:48:57 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:18.598 09:48:57 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:18.598 09:48:57 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:18.598 09:48:57 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:18.598 09:48:57 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:18.598 09:48:57 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:18.598 09:48:57 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.598 09:48:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.859 ************************************ 00:16:18.860 START TEST test_save_ublk_config 00:16:18.860 ************************************ 00:16:18.860 09:48:57 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:18.860 09:48:57 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:18.860 09:48:57 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73482 00:16:18.860 09:48:57 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:18.860 09:48:57 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73482 00:16:18.860 09:48:57 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73482 ']' 00:16:18.860 09:48:57 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.860 09:48:57 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.860 09:48:57 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:18.860 09:48:57 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.860 09:48:57 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.860 09:48:57 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:18.860 [2024-11-28 09:48:57.575794] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
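test_save_ublk_config needs the kernel-side ublk driver and a running SPDK target with ublk debug logging before it can issue any RPCs; ublk.sh loads the module up front and the test then starts spdk_tgt and waits for the RPC socket. A standalone sketch of those prerequisites (waitforlisten's socket polling is reduced here to a plain background start):

  # Kernel prerequisite, as done by ublk.sh@133 above.
  modprobe ublk_drv
  lsmod | grep -q ublk_drv || echo 'ublk_drv not loaded' >&2

  # Start the target with ublk debug logging, then drive it over the RPC socket.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk &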
00:16:18.860 [2024-11-28 09:48:57.575948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73482 ] 00:16:18.860 [2024-11-28 09:48:57.738074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.120 [2024-11-28 09:48:57.860751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.060 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.060 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:20.060 09:48:58 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:20.060 09:48:58 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:20.060 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.060 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:20.060 [2024-11-28 09:48:58.579178] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:20.060 [2024-11-28 09:48:58.580090] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:20.060 malloc0 00:16:20.060 [2024-11-28 09:48:58.651322] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:20.060 [2024-11-28 09:48:58.651421] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:20.060 [2024-11-28 09:48:58.651432] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:20.060 [2024-11-28 09:48:58.651440] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:20.060 [2024-11-28 09:48:58.660281] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:20.060 [2024-11-28 09:48:58.660310] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:20.060 [2024-11-28 09:48:58.667210] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:20.060 [2024-11-28 09:48:58.667328] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:20.060 [2024-11-28 09:48:58.684179] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:20.060 0 00:16:20.060 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.060 09:48:58 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:20.060 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.060 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:20.321 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.321 09:48:58 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:20.321 "subsystems": [ 00:16:20.321 { 00:16:20.321 "subsystem": "fsdev", 00:16:20.321 "config": [ 00:16:20.321 { 00:16:20.322 "method": "fsdev_set_opts", 00:16:20.322 "params": { 00:16:20.322 "fsdev_io_pool_size": 65535, 00:16:20.322 "fsdev_io_cache_size": 256 00:16:20.322 } 00:16:20.322 } 00:16:20.322 ] 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "subsystem": "keyring", 00:16:20.322 "config": [] 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "subsystem": "iobuf", 00:16:20.322 "config": [ 00:16:20.322 { 
00:16:20.322 "method": "iobuf_set_options", 00:16:20.322 "params": { 00:16:20.322 "small_pool_count": 8192, 00:16:20.322 "large_pool_count": 1024, 00:16:20.322 "small_bufsize": 8192, 00:16:20.322 "large_bufsize": 135168, 00:16:20.322 "enable_numa": false 00:16:20.322 } 00:16:20.322 } 00:16:20.322 ] 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "subsystem": "sock", 00:16:20.322 "config": [ 00:16:20.322 { 00:16:20.322 "method": "sock_set_default_impl", 00:16:20.322 "params": { 00:16:20.322 "impl_name": "posix" 00:16:20.322 } 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "method": "sock_impl_set_options", 00:16:20.322 "params": { 00:16:20.322 "impl_name": "ssl", 00:16:20.322 "recv_buf_size": 4096, 00:16:20.322 "send_buf_size": 4096, 00:16:20.322 "enable_recv_pipe": true, 00:16:20.322 "enable_quickack": false, 00:16:20.322 "enable_placement_id": 0, 00:16:20.322 "enable_zerocopy_send_server": true, 00:16:20.322 "enable_zerocopy_send_client": false, 00:16:20.322 "zerocopy_threshold": 0, 00:16:20.322 "tls_version": 0, 00:16:20.322 "enable_ktls": false 00:16:20.322 } 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "method": "sock_impl_set_options", 00:16:20.322 "params": { 00:16:20.322 "impl_name": "posix", 00:16:20.322 "recv_buf_size": 2097152, 00:16:20.322 "send_buf_size": 2097152, 00:16:20.322 "enable_recv_pipe": true, 00:16:20.322 "enable_quickack": false, 00:16:20.322 "enable_placement_id": 0, 00:16:20.322 "enable_zerocopy_send_server": true, 00:16:20.322 "enable_zerocopy_send_client": false, 00:16:20.322 "zerocopy_threshold": 0, 00:16:20.322 "tls_version": 0, 00:16:20.322 "enable_ktls": false 00:16:20.322 } 00:16:20.322 } 00:16:20.322 ] 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "subsystem": "vmd", 00:16:20.322 "config": [] 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "subsystem": "accel", 00:16:20.322 "config": [ 00:16:20.322 { 00:16:20.322 "method": "accel_set_options", 00:16:20.322 "params": { 00:16:20.322 "small_cache_size": 128, 00:16:20.322 "large_cache_size": 16, 00:16:20.322 "task_count": 2048, 00:16:20.322 "sequence_count": 2048, 00:16:20.322 "buf_count": 2048 00:16:20.322 } 00:16:20.322 } 00:16:20.322 ] 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "subsystem": "bdev", 00:16:20.322 "config": [ 00:16:20.322 { 00:16:20.322 "method": "bdev_set_options", 00:16:20.322 "params": { 00:16:20.322 "bdev_io_pool_size": 65535, 00:16:20.322 "bdev_io_cache_size": 256, 00:16:20.322 "bdev_auto_examine": true, 00:16:20.322 "iobuf_small_cache_size": 128, 00:16:20.322 "iobuf_large_cache_size": 16 00:16:20.322 } 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "method": "bdev_raid_set_options", 00:16:20.322 "params": { 00:16:20.322 "process_window_size_kb": 1024, 00:16:20.322 "process_max_bandwidth_mb_sec": 0 00:16:20.322 } 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "method": "bdev_iscsi_set_options", 00:16:20.322 "params": { 00:16:20.322 "timeout_sec": 30 00:16:20.322 } 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "method": "bdev_nvme_set_options", 00:16:20.322 "params": { 00:16:20.322 "action_on_timeout": "none", 00:16:20.322 "timeout_us": 0, 00:16:20.322 "timeout_admin_us": 0, 00:16:20.322 "keep_alive_timeout_ms": 10000, 00:16:20.322 "arbitration_burst": 0, 00:16:20.322 "low_priority_weight": 0, 00:16:20.322 "medium_priority_weight": 0, 00:16:20.322 "high_priority_weight": 0, 00:16:20.322 "nvme_adminq_poll_period_us": 10000, 00:16:20.322 "nvme_ioq_poll_period_us": 0, 00:16:20.322 "io_queue_requests": 0, 00:16:20.322 "delay_cmd_submit": true, 00:16:20.322 "transport_retry_count": 4, 00:16:20.322 
"bdev_retry_count": 3, 00:16:20.322 "transport_ack_timeout": 0, 00:16:20.322 "ctrlr_loss_timeout_sec": 0, 00:16:20.322 "reconnect_delay_sec": 0, 00:16:20.322 "fast_io_fail_timeout_sec": 0, 00:16:20.322 "disable_auto_failback": false, 00:16:20.322 "generate_uuids": false, 00:16:20.322 "transport_tos": 0, 00:16:20.322 "nvme_error_stat": false, 00:16:20.322 "rdma_srq_size": 0, 00:16:20.322 "io_path_stat": false, 00:16:20.322 "allow_accel_sequence": false, 00:16:20.322 "rdma_max_cq_size": 0, 00:16:20.322 "rdma_cm_event_timeout_ms": 0, 00:16:20.322 "dhchap_digests": [ 00:16:20.322 "sha256", 00:16:20.322 "sha384", 00:16:20.322 "sha512" 00:16:20.322 ], 00:16:20.322 "dhchap_dhgroups": [ 00:16:20.322 "null", 00:16:20.322 "ffdhe2048", 00:16:20.322 "ffdhe3072", 00:16:20.322 "ffdhe4096", 00:16:20.322 "ffdhe6144", 00:16:20.322 "ffdhe8192" 00:16:20.322 ] 00:16:20.322 } 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "method": "bdev_nvme_set_hotplug", 00:16:20.322 "params": { 00:16:20.322 "period_us": 100000, 00:16:20.322 "enable": false 00:16:20.322 } 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "method": "bdev_malloc_create", 00:16:20.322 "params": { 00:16:20.322 "name": "malloc0", 00:16:20.322 "num_blocks": 8192, 00:16:20.322 "block_size": 4096, 00:16:20.322 "physical_block_size": 4096, 00:16:20.322 "uuid": "757bd7b3-6414-4c71-a039-794c33cd6fa6", 00:16:20.322 "optimal_io_boundary": 0, 00:16:20.322 "md_size": 0, 00:16:20.322 "dif_type": 0, 00:16:20.322 "dif_is_head_of_md": false, 00:16:20.322 "dif_pi_format": 0 00:16:20.322 } 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "method": "bdev_wait_for_examine" 00:16:20.322 } 00:16:20.322 ] 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "subsystem": "scsi", 00:16:20.322 "config": null 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "subsystem": "scheduler", 00:16:20.322 "config": [ 00:16:20.322 { 00:16:20.322 "method": "framework_set_scheduler", 00:16:20.322 "params": { 00:16:20.322 "name": "static" 00:16:20.322 } 00:16:20.322 } 00:16:20.322 ] 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "subsystem": "vhost_scsi", 00:16:20.322 "config": [] 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "subsystem": "vhost_blk", 00:16:20.322 "config": [] 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "subsystem": "ublk", 00:16:20.322 "config": [ 00:16:20.322 { 00:16:20.322 "method": "ublk_create_target", 00:16:20.322 "params": { 00:16:20.322 "cpumask": "1" 00:16:20.322 } 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "method": "ublk_start_disk", 00:16:20.322 "params": { 00:16:20.322 "bdev_name": "malloc0", 00:16:20.322 "ublk_id": 0, 00:16:20.322 "num_queues": 1, 00:16:20.322 "queue_depth": 128 00:16:20.322 } 00:16:20.322 } 00:16:20.322 ] 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "subsystem": "nbd", 00:16:20.322 "config": [] 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "subsystem": "nvmf", 00:16:20.322 "config": [ 00:16:20.322 { 00:16:20.322 "method": "nvmf_set_config", 00:16:20.322 "params": { 00:16:20.322 "discovery_filter": "match_any", 00:16:20.322 "admin_cmd_passthru": { 00:16:20.322 "identify_ctrlr": false 00:16:20.322 }, 00:16:20.322 "dhchap_digests": [ 00:16:20.322 "sha256", 00:16:20.322 "sha384", 00:16:20.322 "sha512" 00:16:20.322 ], 00:16:20.322 "dhchap_dhgroups": [ 00:16:20.322 "null", 00:16:20.322 "ffdhe2048", 00:16:20.322 "ffdhe3072", 00:16:20.322 "ffdhe4096", 00:16:20.322 "ffdhe6144", 00:16:20.322 "ffdhe8192" 00:16:20.322 ] 00:16:20.322 } 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "method": "nvmf_set_max_subsystems", 00:16:20.322 "params": { 00:16:20.322 "max_subsystems": 1024 
00:16:20.322 } 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "method": "nvmf_set_crdt", 00:16:20.322 "params": { 00:16:20.322 "crdt1": 0, 00:16:20.322 "crdt2": 0, 00:16:20.322 "crdt3": 0 00:16:20.322 } 00:16:20.322 } 00:16:20.322 ] 00:16:20.322 }, 00:16:20.322 { 00:16:20.322 "subsystem": "iscsi", 00:16:20.322 "config": [ 00:16:20.322 { 00:16:20.322 "method": "iscsi_set_options", 00:16:20.322 "params": { 00:16:20.322 "node_base": "iqn.2016-06.io.spdk", 00:16:20.322 "max_sessions": 128, 00:16:20.322 "max_connections_per_session": 2, 00:16:20.322 "max_queue_depth": 64, 00:16:20.322 "default_time2wait": 2, 00:16:20.322 "default_time2retain": 20, 00:16:20.322 "first_burst_length": 8192, 00:16:20.322 "immediate_data": true, 00:16:20.322 "allow_duplicated_isid": false, 00:16:20.322 "error_recovery_level": 0, 00:16:20.322 "nop_timeout": 60, 00:16:20.322 "nop_in_interval": 30, 00:16:20.322 "disable_chap": false, 00:16:20.323 "require_chap": false, 00:16:20.323 "mutual_chap": false, 00:16:20.323 "chap_group": 0, 00:16:20.323 "max_large_datain_per_connection": 64, 00:16:20.323 "max_r2t_per_connection": 4, 00:16:20.323 "pdu_pool_size": 36864, 00:16:20.323 "immediate_data_pool_size": 16384, 00:16:20.323 "data_out_pool_size": 2048 00:16:20.323 } 00:16:20.323 } 00:16:20.323 ] 00:16:20.323 } 00:16:20.323 ] 00:16:20.323 }' 00:16:20.323 09:48:58 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73482 00:16:20.323 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73482 ']' 00:16:20.323 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73482 00:16:20.323 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:20.323 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.323 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73482 00:16:20.323 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:20.323 killing process with pid 73482 00:16:20.323 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:20.323 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73482' 00:16:20.323 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73482 00:16:20.323 09:48:58 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73482 00:16:21.261 [2024-11-28 09:49:00.078967] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:21.261 [2024-11-28 09:49:00.110239] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:21.261 [2024-11-28 09:49:00.110331] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:21.261 [2024-11-28 09:49:00.118181] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:21.262 [2024-11-28 09:49:00.118221] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:21.262 [2024-11-28 09:49:00.118231] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:21.262 [2024-11-28 09:49:00.118250] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:21.262 [2024-11-28 09:49:00.118355] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:22.639 09:49:01 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73537 00:16:22.639 09:49:01 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 73537 00:16:22.639 09:49:01 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:22.639 09:49:01 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:22.639 "subsystems": [ 00:16:22.639 { 00:16:22.639 "subsystem": "fsdev", 00:16:22.639 "config": [ 00:16:22.639 { 00:16:22.639 "method": "fsdev_set_opts", 00:16:22.639 "params": { 00:16:22.639 "fsdev_io_pool_size": 65535, 00:16:22.639 "fsdev_io_cache_size": 256 00:16:22.639 } 00:16:22.639 } 00:16:22.639 ] 00:16:22.639 }, 00:16:22.639 { 00:16:22.639 "subsystem": "keyring", 00:16:22.639 "config": [] 00:16:22.639 }, 00:16:22.639 { 00:16:22.639 "subsystem": "iobuf", 00:16:22.639 "config": [ 00:16:22.639 { 00:16:22.639 "method": "iobuf_set_options", 00:16:22.639 "params": { 00:16:22.639 "small_pool_count": 8192, 00:16:22.639 "large_pool_count": 1024, 00:16:22.639 "small_bufsize": 8192, 00:16:22.639 "large_bufsize": 135168, 00:16:22.639 "enable_numa": false 00:16:22.639 } 00:16:22.639 } 00:16:22.639 ] 00:16:22.639 }, 00:16:22.639 { 00:16:22.639 "subsystem": "sock", 00:16:22.639 "config": [ 00:16:22.639 { 00:16:22.639 "method": "sock_set_default_impl", 00:16:22.639 "params": { 00:16:22.639 "impl_name": "posix" 00:16:22.639 } 00:16:22.639 }, 00:16:22.639 { 00:16:22.639 "method": "sock_impl_set_options", 00:16:22.639 "params": { 00:16:22.639 "impl_name": "ssl", 00:16:22.639 "recv_buf_size": 4096, 00:16:22.639 "send_buf_size": 4096, 00:16:22.639 "enable_recv_pipe": true, 00:16:22.639 "enable_quickack": false, 00:16:22.639 "enable_placement_id": 0, 00:16:22.639 "enable_zerocopy_send_server": true, 00:16:22.639 "enable_zerocopy_send_client": false, 00:16:22.639 "zerocopy_threshold": 0, 00:16:22.639 "tls_version": 0, 00:16:22.639 "enable_ktls": false 00:16:22.639 } 00:16:22.639 }, 00:16:22.639 { 00:16:22.639 "method": "sock_impl_set_options", 00:16:22.639 "params": { 00:16:22.639 "impl_name": "posix", 00:16:22.639 "recv_buf_size": 2097152, 00:16:22.639 "send_buf_size": 2097152, 00:16:22.639 "enable_recv_pipe": true, 00:16:22.639 "enable_quickack": false, 00:16:22.639 "enable_placement_id": 0, 00:16:22.639 "enable_zerocopy_send_server": true, 00:16:22.639 "enable_zerocopy_send_client": false, 00:16:22.639 "zerocopy_threshold": 0, 00:16:22.639 "tls_version": 0, 00:16:22.639 "enable_ktls": false 00:16:22.639 } 00:16:22.639 } 00:16:22.639 ] 00:16:22.639 }, 00:16:22.639 { 00:16:22.639 "subsystem": "vmd", 00:16:22.639 "config": [] 00:16:22.639 }, 00:16:22.639 { 00:16:22.639 "subsystem": "accel", 00:16:22.639 "config": [ 00:16:22.639 { 00:16:22.639 "method": "accel_set_options", 00:16:22.639 "params": { 00:16:22.639 "small_cache_size": 128, 00:16:22.639 "large_cache_size": 16, 00:16:22.639 "task_count": 2048, 00:16:22.639 "sequence_count": 2048, 00:16:22.639 "buf_count": 2048 00:16:22.639 } 00:16:22.639 } 00:16:22.639 ] 00:16:22.639 }, 00:16:22.639 { 00:16:22.639 "subsystem": "bdev", 00:16:22.639 "config": [ 00:16:22.639 { 00:16:22.639 "method": "bdev_set_options", 00:16:22.639 "params": { 00:16:22.639 "bdev_io_pool_size": 65535, 00:16:22.639 "bdev_io_cache_size": 256, 00:16:22.639 "bdev_auto_examine": true, 00:16:22.639 "iobuf_small_cache_size": 128, 00:16:22.639 "iobuf_large_cache_size": 16 00:16:22.639 } 00:16:22.639 }, 00:16:22.639 { 00:16:22.639 "method": "bdev_raid_set_options", 00:16:22.639 "params": { 00:16:22.639 "process_window_size_kb": 1024, 00:16:22.639 "process_max_bandwidth_mb_sec": 0 00:16:22.639 } 00:16:22.639 }, 
00:16:22.639 { 00:16:22.639 "method": "bdev_iscsi_set_options", 00:16:22.639 "params": { 00:16:22.639 "timeout_sec": 30 00:16:22.639 } 00:16:22.639 }, 00:16:22.639 { 00:16:22.639 "method": "bdev_nvme_set_options", 00:16:22.639 "params": { 00:16:22.639 "action_on_timeout": "none", 00:16:22.639 "timeout_us": 0, 00:16:22.639 "timeout_admin_us": 0, 00:16:22.639 "keep_alive_timeout_ms": 10000, 00:16:22.639 "arbitration_burst": 0, 00:16:22.639 "low_priority_weight": 0, 00:16:22.639 "medium_priority_weight": 0, 00:16:22.639 "high_priority_weight": 0, 00:16:22.639 "nvme_adminq_poll_period_us": 10000, 00:16:22.639 "nvme_ioq_poll_period_us": 0, 00:16:22.639 "io_queue_requests": 0, 00:16:22.639 "delay_cmd_submit": true, 00:16:22.639 "transport_retry_count": 4, 00:16:22.639 "bdev_retry_count": 3, 00:16:22.639 "transport_ack_timeout": 0, 00:16:22.639 "ctrlr_loss_timeout_sec": 0, 00:16:22.639 "reconnect_delay_sec": 0, 00:16:22.639 "fast_io_fail_timeout_sec": 0, 00:16:22.639 "disable_auto_failback": false, 00:16:22.639 "generate_uuids": false, 00:16:22.639 "transport_tos": 0, 00:16:22.639 "nvme_error_stat": false, 00:16:22.639 "rdma_srq_size": 0, 00:16:22.639 "io_path_stat": false, 00:16:22.639 "allow_accel_sequence": false, 00:16:22.639 "rdma_max_cq_size": 0, 00:16:22.639 "rdma_cm_event_timeout_ms": 0, 00:16:22.639 "dhchap_digests": [ 00:16:22.639 "sha256", 00:16:22.639 "sha384", 00:16:22.639 "sha512" 00:16:22.639 ], 00:16:22.639 "dhchap_dhgroups": [ 00:16:22.639 "null", 00:16:22.639 "ffdhe2048", 00:16:22.639 "ffdhe3072", 00:16:22.639 "ffdhe4096", 00:16:22.639 "ffdhe6144", 00:16:22.639 "ffdhe8192" 00:16:22.639 ] 00:16:22.639 } 00:16:22.639 }, 00:16:22.639 { 00:16:22.639 "method": "bdev_nvme_set_hotplug", 00:16:22.639 "params": { 00:16:22.639 "period_us": 100000, 00:16:22.639 "enable": false 00:16:22.639 } 00:16:22.639 }, 00:16:22.639 { 00:16:22.639 "method": "bdev_malloc_create", 00:16:22.639 "params": { 00:16:22.639 "name": "malloc0", 00:16:22.639 "num_blocks": 8192, 00:16:22.639 "block_size": 4096, 00:16:22.639 "physical_block_size": 4096, 00:16:22.639 "uuid": "757bd7b3-6414-4c71-a039-794c33cd6fa6", 00:16:22.639 "optimal_io_boundary": 0, 00:16:22.639 "md_size": 0, 00:16:22.639 "dif_type": 0, 00:16:22.639 "dif_is_head_of_md": false, 00:16:22.639 "dif_pi_format": 0 00:16:22.639 } 00:16:22.639 }, 00:16:22.639 { 00:16:22.639 "method": "bdev_wait_for_examine" 00:16:22.639 } 00:16:22.639 ] 00:16:22.639 }, 00:16:22.640 { 00:16:22.640 "subsystem": "scsi", 00:16:22.640 "config": null 00:16:22.640 }, 00:16:22.640 { 00:16:22.640 "subsystem": "scheduler", 00:16:22.640 "config": [ 00:16:22.640 { 00:16:22.640 "method": "framework_set_scheduler", 00:16:22.640 "params": { 00:16:22.640 "name": "static" 00:16:22.640 } 00:16:22.640 } 00:16:22.640 ] 00:16:22.640 }, 00:16:22.640 { 00:16:22.640 "subsystem": "vhost_scsi", 00:16:22.640 "config": [] 00:16:22.640 }, 00:16:22.640 { 00:16:22.640 "subsystem": "vhost_blk", 00:16:22.640 "config": [] 00:16:22.640 }, 00:16:22.640 { 00:16:22.640 "subsystem": "ublk", 00:16:22.640 "config": [ 00:16:22.640 { 00:16:22.640 "method": "ublk_create_target", 00:16:22.640 "params": { 00:16:22.640 "cpumask": "1" 00:16:22.640 } 00:16:22.640 }, 00:16:22.640 { 00:16:22.640 "method": "ublk_start_disk", 00:16:22.640 "params": { 00:16:22.640 "bdev_name": "malloc0", 00:16:22.640 "ublk_id": 0, 00:16:22.640 "num_queues": 1, 00:16:22.640 "queue_depth": 128 00:16:22.640 } 00:16:22.640 } 00:16:22.640 ] 00:16:22.640 }, 00:16:22.640 { 00:16:22.640 "subsystem": "nbd", 00:16:22.640 "config": [] 00:16:22.640 }, 
00:16:22.640 { 00:16:22.640 "subsystem": "nvmf", 00:16:22.640 "config": [ 00:16:22.640 { 00:16:22.640 "method": "nvmf_set_config", 00:16:22.640 "params": { 00:16:22.640 "discovery_filter": "match_any", 00:16:22.640 "admin_cmd_passthru": { 00:16:22.640 "identify_ctrlr": false 00:16:22.640 }, 00:16:22.640 "dhchap_digests": [ 00:16:22.640 "sha256", 00:16:22.640 "sha384", 00:16:22.640 "sha512" 00:16:22.640 ], 00:16:22.640 "dhchap_dhgroups": [ 00:16:22.640 "null", 00:16:22.640 "ffdhe2048", 00:16:22.640 "ffdhe3072", 00:16:22.640 "ffdhe4096", 00:16:22.640 "ffdhe6144", 00:16:22.640 "ffdhe8192" 00:16:22.640 ] 00:16:22.640 } 00:16:22.640 }, 00:16:22.640 { 00:16:22.640 "method": "nvmf_set_max_subsystems", 00:16:22.640 "params": { 00:16:22.640 "max_subsystems": 1024 00:16:22.640 } 00:16:22.640 }, 00:16:22.640 { 00:16:22.640 "method": "nvmf_set_crdt", 00:16:22.640 "params": { 00:16:22.640 "crdt1": 0, 00:16:22.640 "crdt2": 0, 00:16:22.640 "crdt3": 0 00:16:22.640 } 00:16:22.640 } 00:16:22.640 ] 00:16:22.640 }, 00:16:22.640 { 00:16:22.640 "subsystem": "iscsi", 00:16:22.640 "config": [ 00:16:22.640 { 00:16:22.640 "method": "iscsi_set_options", 00:16:22.640 "params": { 00:16:22.640 "node_base": "iqn.2016-06.io.spdk", 00:16:22.640 "max_sessions": 128, 00:16:22.640 "max_connections_per_session": 2, 00:16:22.640 "max_queue_depth": 64, 00:16:22.640 "default_time2wait": 2, 00:16:22.640 "default_time2retain": 20, 00:16:22.640 "first_burst_length": 8192, 00:16:22.640 "immediate_data": true, 00:16:22.640 "allow_duplicated_isid": false, 00:16:22.640 "error_recovery_level": 0, 00:16:22.640 "nop_timeout": 60, 00:16:22.640 "nop_in_interval": 30, 00:16:22.640 "disable_chap": false, 00:16:22.640 "require_chap": false, 00:16:22.640 "mutual_chap": false, 00:16:22.640 "chap_group": 0, 00:16:22.640 "max_large_datain_per_connection": 64, 00:16:22.640 "max_r2t_per_connection": 4, 00:16:22.640 "pdu_pool_size": 36864, 00:16:22.640 "immediate_data_pool_size": 16384, 00:16:22.640 "data_out_pool_size": 2048 00:16:22.640 } 00:16:22.640 } 00:16:22.640 ] 00:16:22.640 } 00:16:22.640 ] 00:16:22.640 }' 00:16:22.640 09:49:01 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73537 ']' 00:16:22.640 09:49:01 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.640 09:49:01 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.640 09:49:01 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.640 09:49:01 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.640 09:49:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:22.640 [2024-11-28 09:49:01.513621] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:22.640 [2024-11-28 09:49:01.513732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73537 ] 00:16:22.899 [2024-11-28 09:49:01.667636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.899 [2024-11-28 09:49:01.752804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.833 [2024-11-28 09:49:02.397168] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:23.833 [2024-11-28 09:49:02.397812] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:23.833 [2024-11-28 09:49:02.405252] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:23.833 [2024-11-28 09:49:02.405308] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:23.833 [2024-11-28 09:49:02.405315] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:23.833 [2024-11-28 09:49:02.405321] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:23.833 [2024-11-28 09:49:02.414218] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:23.833 [2024-11-28 09:49:02.414235] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:23.833 [2024-11-28 09:49:02.421175] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:23.833 [2024-11-28 09:49:02.421248] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:23.833 [2024-11-28 09:49:02.438166] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73537 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73537 ']' 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73537 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73537 00:16:23.833 killing process with pid 73537 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:23.833 
09:49:02 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73537' 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73537 00:16:23.833 09:49:02 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73537 00:16:24.768 [2024-11-28 09:49:03.597648] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:24.768 [2024-11-28 09:49:03.633234] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:24.768 [2024-11-28 09:49:03.633336] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:24.768 [2024-11-28 09:49:03.642173] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:24.768 [2024-11-28 09:49:03.642213] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:24.768 [2024-11-28 09:49:03.642219] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:24.768 [2024-11-28 09:49:03.642239] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:24.768 [2024-11-28 09:49:03.642350] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:26.141 09:49:04 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:26.141 00:16:26.141 real 0m7.329s 00:16:26.141 user 0m4.780s 00:16:26.141 sys 0m3.199s 00:16:26.141 ************************************ 00:16:26.141 09:49:04 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.141 09:49:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:26.141 END TEST test_save_ublk_config 00:16:26.141 ************************************ 00:16:26.141 09:49:04 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73609 00:16:26.141 09:49:04 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:26.141 09:49:04 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73609 00:16:26.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.141 09:49:04 ublk -- common/autotest_common.sh@835 -- # '[' -z 73609 ']' 00:16:26.141 09:49:04 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.141 09:49:04 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.141 09:49:04 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:26.141 09:49:04 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.141 09:49:04 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.141 09:49:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.141 [2024-11-28 09:49:04.934809] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:26.141 [2024-11-28 09:49:04.934928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73609 ] 00:16:26.400 [2024-11-28 09:49:05.091029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:26.400 [2024-11-28 09:49:05.167567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.400 [2024-11-28 09:49:05.167649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.965 09:49:05 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.965 09:49:05 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:26.965 09:49:05 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:26.965 09:49:05 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:26.965 09:49:05 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.965 09:49:05 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.965 ************************************ 00:16:26.965 START TEST test_create_ublk 00:16:26.965 ************************************ 00:16:26.966 09:49:05 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:26.966 09:49:05 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:26.966 09:49:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.966 09:49:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.966 [2024-11-28 09:49:05.793170] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:26.966 [2024-11-28 09:49:05.794624] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:26.966 09:49:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.966 09:49:05 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:26.966 09:49:05 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:26.966 09:49:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.966 09:49:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:27.224 09:49:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.224 09:49:05 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:27.224 09:49:05 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:27.224 09:49:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.224 09:49:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:27.224 [2024-11-28 09:49:05.956268] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:27.224 [2024-11-28 09:49:05.956565] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:27.225 [2024-11-28 09:49:05.956578] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:27.225 [2024-11-28 09:49:05.956584] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:27.225 [2024-11-28 09:49:05.964180] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:27.225 [2024-11-28 09:49:05.964197] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:27.225 
[2024-11-28 09:49:05.972173] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:27.225 [2024-11-28 09:49:05.972664] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:27.225 [2024-11-28 09:49:05.990216] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:27.225 09:49:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.225 09:49:05 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:27.225 09:49:05 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:27.225 09:49:05 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:27.225 09:49:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.225 09:49:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:27.225 09:49:06 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.225 09:49:06 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:27.225 { 00:16:27.225 "ublk_device": "/dev/ublkb0", 00:16:27.225 "id": 0, 00:16:27.225 "queue_depth": 512, 00:16:27.225 "num_queues": 4, 00:16:27.225 "bdev_name": "Malloc0" 00:16:27.225 } 00:16:27.225 ]' 00:16:27.225 09:49:06 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:27.225 09:49:06 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:27.225 09:49:06 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:27.225 09:49:06 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:27.225 09:49:06 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:27.225 09:49:06 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:27.225 09:49:06 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:27.483 09:49:06 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:27.483 09:49:06 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:27.483 09:49:06 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:27.483 09:49:06 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:27.483 09:49:06 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:27.483 09:49:06 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:27.483 09:49:06 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:27.483 09:49:06 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:27.483 09:49:06 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:27.483 09:49:06 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:27.483 09:49:06 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:27.483 09:49:06 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:27.483 09:49:06 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:27.483 09:49:06 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
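The assembled template writes a 0xcc pattern over the first 128 MiB of /dev/ublkb0 for 10 seconds; as fio reports in the lines that follow, the verification read phase never starts because the timed write phase consumes the whole runtime. A separate read pass with the same pattern would perform that check; a sketch, not part of this test:

    fio --name=verify_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=read --direct=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0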
00:16:27.483 09:49:06 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:27.483 fio: verification read phase will never start because write phase uses all of runtime 00:16:27.483 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:27.483 fio-3.35 00:16:27.483 Starting 1 process 00:16:39.683 00:16:39.683 fio_test: (groupid=0, jobs=1): err= 0: pid=73655: Thu Nov 28 09:49:16 2024 00:16:39.683 write: IOPS=17.5k, BW=68.2MiB/s (71.5MB/s)(682MiB/10001msec); 0 zone resets 00:16:39.683 clat (usec): min=38, max=4104, avg=56.48, stdev=91.13 00:16:39.683 lat (usec): min=39, max=4105, avg=56.93, stdev=91.15 00:16:39.683 clat percentiles (usec): 00:16:39.683 | 1.00th=[ 44], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 49], 00:16:39.683 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:16:39.683 | 70.00th=[ 55], 80.00th=[ 56], 90.00th=[ 60], 95.00th=[ 67], 00:16:39.683 | 99.00th=[ 79], 99.50th=[ 174], 99.90th=[ 1729], 99.95th=[ 2671], 00:16:39.683 | 99.99th=[ 3589] 00:16:39.683 bw ( KiB/s): min=64112, max=73944, per=99.99%, avg=69821.05, stdev=2035.93, samples=19 00:16:39.683 iops : min=16028, max=18486, avg=17455.26, stdev=508.93, samples=19 00:16:39.683 lat (usec) : 50=36.04%, 100=63.32%, 250=0.41%, 500=0.07%, 750=0.01% 00:16:39.683 lat (usec) : 1000=0.01% 00:16:39.683 lat (msec) : 2=0.05%, 4=0.08%, 10=0.01% 00:16:39.683 cpu : usr=2.95%, sys=17.31%, ctx=174577, majf=0, minf=795 00:16:39.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:39.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.683 issued rwts: total=0,174586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:39.683 00:16:39.683 Run status group 0 (all jobs): 00:16:39.683 WRITE: bw=68.2MiB/s (71.5MB/s), 68.2MiB/s-68.2MiB/s (71.5MB/s-71.5MB/s), io=682MiB (715MB), run=10001-10001msec 00:16:39.683 00:16:39.683 Disk stats (read/write): 00:16:39.683 ublkb0: ios=0/172723, merge=0/0, ticks=0/7621, in_queue=7621, util=99.08% 00:16:39.683 09:49:16 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:39.683 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.683 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.683 [2024-11-28 09:49:16.405124] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.683 [2024-11-28 09:49:16.440639] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.683 [2024-11-28 09:49:16.441530] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.683 [2024-11-28 09:49:16.448183] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.683 [2024-11-28 09:49:16.448407] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:39.683 [2024-11-28 09:49:16.448420] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:39.683 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.683 09:49:16 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:16:39.683 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:39.683 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:39.683 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:39.683 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.683 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:39.683 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.683 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:39.683 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.683 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.683 [2024-11-28 09:49:16.464225] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:39.683 request: 00:16:39.683 { 00:16:39.683 "ublk_id": 0, 00:16:39.684 "method": "ublk_stop_disk", 00:16:39.684 "req_id": 1 00:16:39.684 } 00:16:39.684 Got JSON-RPC error response 00:16:39.684 response: 00:16:39.684 { 00:16:39.684 "code": -19, 00:16:39.684 "message": "No such device" 00:16:39.684 } 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:39.684 09:49:16 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.684 [2024-11-28 09:49:16.480222] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:39.684 [2024-11-28 09:49:16.483776] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:39.684 [2024-11-28 09:49:16.483808] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.684 09:49:16 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.684 09:49:16 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:39.684 09:49:16 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.684 09:49:16 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:39.684 09:49:16 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:39.684 09:49:16 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:39.684 09:49:16 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.684 09:49:16 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:39.684 09:49:16 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:39.684 ************************************ 00:16:39.684 END TEST test_create_ublk 00:16:39.684 ************************************ 00:16:39.684 09:49:16 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:39.684 00:16:39.684 real 0m11.169s 00:16:39.684 user 0m0.597s 00:16:39.684 sys 0m1.803s 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.684 09:49:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.684 09:49:16 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:39.684 09:49:16 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:39.684 09:49:16 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.684 09:49:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.684 ************************************ 00:16:39.684 START TEST test_create_multi_ublk 00:16:39.684 ************************************ 00:16:39.684 09:49:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:39.684 09:49:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:39.684 09:49:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.684 09:49:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.684 [2024-11-28 09:49:17.004167] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:39.684 [2024-11-28 09:49:17.005716] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.684 [2024-11-28 09:49:17.220267] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:39.684 [2024-11-28 09:49:17.220567] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:39.684 [2024-11-28 09:49:17.220579] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:39.684 [2024-11-28 09:49:17.220587] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.684 [2024-11-28 09:49:17.244171] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.684 [2024-11-28 09:49:17.244191] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.684 [2024-11-28 09:49:17.256180] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.684 [2024-11-28 09:49:17.256669] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:39.684 [2024-11-28 09:49:17.269194] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.684 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.684 [2024-11-28 09:49:17.488259] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:39.684 [2024-11-28 09:49:17.488550] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:39.684 [2024-11-28 09:49:17.488563] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:39.684 [2024-11-28 09:49:17.488568] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.684 [2024-11-28 09:49:17.496182] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.684 [2024-11-28 09:49:17.496200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.684 [2024-11-28 09:49:17.504174] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.684 [2024-11-28 09:49:17.504655] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:39.685 [2024-11-28 09:49:17.513204] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.685 
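test_create_multi_ublk repeats the same create sequence for four malloc-backed devices; the per-device RPCs traced above and below amount to the following loop (a sketch; MAX_DEV_ID is 3 in this run, and the -q/-d values match the test):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
    for i in 0 1 2 3; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512
    done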
09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.685 [2024-11-28 09:49:17.672263] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:39.685 [2024-11-28 09:49:17.672561] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:39.685 [2024-11-28 09:49:17.672571] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:39.685 [2024-11-28 09:49:17.672578] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.685 [2024-11-28 09:49:17.680180] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.685 [2024-11-28 09:49:17.680199] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.685 [2024-11-28 09:49:17.688181] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.685 [2024-11-28 09:49:17.688683] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:39.685 [2024-11-28 09:49:17.697192] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.685 [2024-11-28 09:49:17.856262] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:39.685 [2024-11-28 09:49:17.856551] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:39.685 [2024-11-28 09:49:17.856564] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:39.685 [2024-11-28 09:49:17.856570] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.685 
[2024-11-28 09:49:17.864190] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.685 [2024-11-28 09:49:17.864207] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.685 [2024-11-28 09:49:17.872171] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.685 [2024-11-28 09:49:17.872651] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:39.685 [2024-11-28 09:49:17.881199] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:39.685 { 00:16:39.685 "ublk_device": "/dev/ublkb0", 00:16:39.685 "id": 0, 00:16:39.685 "queue_depth": 512, 00:16:39.685 "num_queues": 4, 00:16:39.685 "bdev_name": "Malloc0" 00:16:39.685 }, 00:16:39.685 { 00:16:39.685 "ublk_device": "/dev/ublkb1", 00:16:39.685 "id": 1, 00:16:39.685 "queue_depth": 512, 00:16:39.685 "num_queues": 4, 00:16:39.685 "bdev_name": "Malloc1" 00:16:39.685 }, 00:16:39.685 { 00:16:39.685 "ublk_device": "/dev/ublkb2", 00:16:39.685 "id": 2, 00:16:39.685 "queue_depth": 512, 00:16:39.685 "num_queues": 4, 00:16:39.685 "bdev_name": "Malloc2" 00:16:39.685 }, 00:16:39.685 { 00:16:39.685 "ublk_device": "/dev/ublkb3", 00:16:39.685 "id": 3, 00:16:39.685 "queue_depth": 512, 00:16:39.685 "num_queues": 4, 00:16:39.685 "bdev_name": "Malloc3" 00:16:39.685 } 00:16:39.685 ]' 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:39.685 09:49:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:39.685 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:39.686 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:39.686 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:39.686 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:39.686 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:39.686 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:39.686 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:39.686 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:39.686 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:39.686 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:39.686 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:39.686 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.686 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:39.686 09:49:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.686 09:49:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.686 [2024-11-28 09:49:18.560245] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.945 [2024-11-28 09:49:18.597200] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.945 [2024-11-28 09:49:18.597869] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.945 [2024-11-28 09:49:18.609201] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.945 [2024-11-28 09:49:18.609421] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:39.945 [2024-11-28 09:49:18.609435] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.945 [2024-11-28 09:49:18.624220] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.945 [2024-11-28 09:49:18.656618] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.945 [2024-11-28 09:49:18.657600] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.945 [2024-11-28 09:49:18.664179] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.945 [2024-11-28 09:49:18.664387] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:39.945 [2024-11-28 09:49:18.664395] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.945 [2024-11-28 09:49:18.680247] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.945 [2024-11-28 09:49:18.716206] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.945 [2024-11-28 09:49:18.716802] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.945 [2024-11-28 09:49:18.724176] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.945 [2024-11-28 09:49:18.724398] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:39.945 [2024-11-28 09:49:18.724406] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.945 [2024-11-28 09:49:18.740230] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.945 [2024-11-28 09:49:18.770595] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.945 [2024-11-28 09:49:18.771506] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.945 [2024-11-28 09:49:18.780180] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.945 [2024-11-28 09:49:18.780405] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:39.945 [2024-11-28 09:49:18.780418] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.945 09:49:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:40.204 [2024-11-28 09:49:18.980214] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:40.204 [2024-11-28 09:49:18.983755] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:40.204 [2024-11-28 09:49:18.983782] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:40.204 09:49:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:40.204 09:49:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:40.204 09:49:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:40.204 09:49:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.204 09:49:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.770 09:49:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.770 09:49:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:40.770 09:49:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:40.770 09:49:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.771 09:49:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.029 09:49:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.029 09:49:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:41.029 09:49:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:41.029 09:49:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.029 09:49:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.288 09:49:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.288 09:49:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:41.288 09:49:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:41.288 09:49:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.288 09:49:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.288 09:49:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.288 09:49:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:41.288 09:49:20 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:41.288 09:49:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.288 09:49:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.288 09:49:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.288 09:49:20 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:41.288 09:49:20 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:41.288 09:49:20 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:41.288 09:49:20 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:41.288 09:49:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.288 09:49:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.288 09:49:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.288 09:49:20 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:41.288 09:49:20 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:41.546 ************************************ 00:16:41.546 END TEST test_create_multi_ublk 00:16:41.546 ************************************ 00:16:41.546 09:49:20 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:41.546 00:16:41.546 real 0m3.196s 00:16:41.546 user 0m0.826s 00:16:41.546 sys 0m0.157s 00:16:41.546 09:49:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.546 09:49:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.546 09:49:20 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:41.546 09:49:20 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:41.546 09:49:20 ublk -- ublk/ublk.sh@130 -- # killprocess 73609 00:16:41.546 09:49:20 ublk -- common/autotest_common.sh@954 -- # '[' -z 73609 ']' 00:16:41.546 09:49:20 ublk -- common/autotest_common.sh@958 -- # kill -0 73609 00:16:41.546 09:49:20 ublk -- common/autotest_common.sh@959 -- # uname 00:16:41.546 09:49:20 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.546 09:49:20 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73609 00:16:41.546 killing process with pid 73609 00:16:41.546 09:49:20 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:41.546 09:49:20 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:41.546 09:49:20 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73609' 00:16:41.546 09:49:20 ublk -- common/autotest_common.sh@973 -- # kill 73609 00:16:41.546 09:49:20 ublk -- common/autotest_common.sh@978 -- # wait 73609 00:16:42.113 [2024-11-28 09:49:20.959328] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:42.113 [2024-11-28 09:49:20.959568] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:43.052 00:16:43.052 real 0m24.428s 00:16:43.052 user 0m34.820s 00:16:43.052 sys 0m10.223s 00:16:43.052 09:49:21 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.052 09:49:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.052 ************************************ 00:16:43.052 END TEST ublk 00:16:43.052 ************************************ 00:16:43.052 09:49:21 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:43.052 
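Before ublk_recovery begins, the multi-ublk test above tears everything down in a fixed order: stop each disk, destroy the target with an extended RPC timeout, then delete the backing bdevs. A sketch of that order, using only the RPCs seen in this run:

    for i in 0 1 2 3; do /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_stop_disk $i; done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target   # longer timeout, as in the log
    for i in 0 1 2 3; do /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc$i; done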
09:49:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:43.052 09:49:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.052 09:49:21 -- common/autotest_common.sh@10 -- # set +x 00:16:43.052 ************************************ 00:16:43.052 START TEST ublk_recovery 00:16:43.052 ************************************ 00:16:43.052 09:49:21 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:43.052 * Looking for test storage... 00:16:43.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:43.052 09:49:21 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:43.052 09:49:21 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:43.052 09:49:21 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:43.052 09:49:21 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:43.052 09:49:21 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:43.314 09:49:21 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.314 09:49:21 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:43.314 09:49:21 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.314 09:49:21 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.314 09:49:21 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.314 09:49:21 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:43.314 09:49:21 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.314 09:49:21 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:43.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.314 --rc genhtml_branch_coverage=1 00:16:43.314 --rc genhtml_function_coverage=1 00:16:43.314 --rc genhtml_legend=1 00:16:43.314 --rc geninfo_all_blocks=1 00:16:43.314 --rc geninfo_unexecuted_blocks=1 00:16:43.314 00:16:43.314 ' 00:16:43.314 09:49:21 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:43.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.314 --rc genhtml_branch_coverage=1 00:16:43.314 --rc genhtml_function_coverage=1 00:16:43.314 --rc genhtml_legend=1 00:16:43.314 --rc geninfo_all_blocks=1 00:16:43.314 --rc geninfo_unexecuted_blocks=1 00:16:43.314 00:16:43.314 ' 00:16:43.314 09:49:21 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:43.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.314 --rc genhtml_branch_coverage=1 00:16:43.314 --rc genhtml_function_coverage=1 00:16:43.314 --rc genhtml_legend=1 00:16:43.314 --rc geninfo_all_blocks=1 00:16:43.314 --rc geninfo_unexecuted_blocks=1 00:16:43.314 00:16:43.314 ' 00:16:43.314 09:49:21 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:43.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.314 --rc genhtml_branch_coverage=1 00:16:43.314 --rc genhtml_function_coverage=1 00:16:43.314 --rc genhtml_legend=1 00:16:43.314 --rc geninfo_all_blocks=1 00:16:43.314 --rc geninfo_unexecuted_blocks=1 00:16:43.314 00:16:43.314 ' 00:16:43.314 09:49:21 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:43.314 09:49:21 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:43.314 09:49:21 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:43.314 09:49:21 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:43.314 09:49:21 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:43.314 09:49:21 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:43.314 09:49:21 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:43.314 09:49:21 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:43.314 09:49:21 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:43.314 09:49:21 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:43.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.314 09:49:21 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74001 00:16:43.314 09:49:21 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:43.314 09:49:21 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74001 00:16:43.314 09:49:21 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74001 ']' 00:16:43.314 09:49:21 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.314 09:49:21 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:43.314 09:49:21 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.314 09:49:21 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.314 09:49:21 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.314 09:49:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.314 [2024-11-28 09:49:22.029742] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:43.314 [2024-11-28 09:49:22.029894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74001 ] 00:16:43.314 [2024-11-28 09:49:22.191105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:43.575 [2024-11-28 09:49:22.284854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.575 [2024-11-28 09:49:22.284906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.147 09:49:22 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.147 09:49:22 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:44.147 09:49:22 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:44.147 09:49:22 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.147 09:49:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.147 [2024-11-28 09:49:22.869179] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:44.147 [2024-11-28 09:49:22.870889] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:44.147 09:49:22 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.147 09:49:22 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:44.147 09:49:22 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.147 09:49:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.147 malloc0 00:16:44.147 09:49:22 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.147 09:49:22 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:44.148 09:49:22 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.148 09:49:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.148 [2024-11-28 09:49:22.965282] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:44.148 [2024-11-28 09:49:22.965378] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:44.148 [2024-11-28 09:49:22.965388] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:44.148 [2024-11-28 09:49:22.965397] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:44.148 [2024-11-28 09:49:22.974271] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:44.148 [2024-11-28 09:49:22.974289] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:44.148 [2024-11-28 09:49:22.981189] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:44.148 [2024-11-28 09:49:22.981318] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:44.148 [2024-11-28 09:49:22.996185] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:44.148 1 00:16:44.148 09:49:23 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.148 09:49:23 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:45.529 09:49:24 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74036 00:16:45.529 09:49:24 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:45.529 09:49:24 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:45.529 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:45.529 fio-3.35 00:16:45.529 Starting 1 process 00:16:50.792 09:49:29 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74001 00:16:50.792 09:49:29 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:56.178 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74001 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:56.178 09:49:34 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:56.178 09:49:34 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74148 00:16:56.178 09:49:34 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:56.178 09:49:34 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74148 00:16:56.178 09:49:34 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74148 ']' 00:16:56.178 09:49:34 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.178 09:49:34 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.178 09:49:34 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.179 09:49:34 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.179 09:49:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.179 [2024-11-28 09:49:34.088821] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
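At this point the first half of the recovery scenario is complete: the ublk kernel module was loaded, a target on pid 74001 exposed the malloc0 bdev as /dev/ublkb1 with 2 queues of depth 128, fio began a 60-second randrw run against it, and that target was then killed with SIGKILL while I/O was still in flight; the second target starting here exists only so the same disk can be re-attached. A minimal sketch of the flow up to this point, driven directly with rpc.py (paths, core mask, and queue parameters mirror this run; the sleeps stand in for the waitforlisten helper and are not part of the real script):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    modprobe ublk_drv
    $TGT -m 0x3 -L ublk & tgt_pid=$!
    sleep 2                                          # placeholder: wait for /var/tmp/spdk.sock

    $RPC ublk_create_target
    $RPC bdev_malloc_create -b malloc0 64 4096
    $RPC ublk_start_disk malloc0 1 -q 2 -d 128       # exposes /dev/ublkb1

    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &

    sleep 5
    kill -9 "$tgt_pid"                               # crash the target while fio is still running
    sleep 5

    $TGT -m 0x3 -L ublk & tgt_pid=$!                 # fresh target; the disk is re-attached next with
    sleep 2                                          # ublk_create_target + ublk_recover_disk malloc0 1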
00:16:56.179 [2024-11-28 09:49:34.088924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74148 ] 00:16:56.179 [2024-11-28 09:49:34.239795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:56.179 [2024-11-28 09:49:34.331335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.179 [2024-11-28 09:49:34.331391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.179 09:49:34 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.179 09:49:34 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:56.179 09:49:34 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:56.179 09:49:34 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.179 09:49:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.179 [2024-11-28 09:49:34.879177] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:56.179 [2024-11-28 09:49:34.880854] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:56.179 09:49:34 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.179 09:49:34 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:56.179 09:49:34 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.179 09:49:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.179 malloc0 00:16:56.179 09:49:34 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.179 09:49:34 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:56.179 09:49:34 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.179 09:49:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.179 [2024-11-28 09:49:34.967330] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:56.179 [2024-11-28 09:49:34.967362] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:56.179 [2024-11-28 09:49:34.967371] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:56.179 [2024-11-28 09:49:34.975201] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:56.179 [2024-11-28 09:49:34.975223] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:16:56.179 [2024-11-28 09:49:34.975230] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:56.179 [2024-11-28 09:49:34.975300] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:56.179 1 00:16:56.179 09:49:34 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.179 09:49:34 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74036 00:16:56.179 [2024-11-28 09:49:34.982211] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:56.179 [2024-11-28 09:49:34.987508] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:56.179 [2024-11-28 09:49:34.994351] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:56.179 [2024-11-28 
09:49:34.994370] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:52.408 00:17:52.408 fio_test: (groupid=0, jobs=1): err= 0: pid=74039: Thu Nov 28 09:50:24 2024 00:17:52.408 read: IOPS=25.1k, BW=98.1MiB/s (103MB/s)(5886MiB/60003msec) 00:17:52.408 slat (nsec): min=1116, max=740001, avg=5453.95, stdev=2273.67 00:17:52.408 clat (usec): min=799, max=5993.5k, avg=2525.26, stdev=40830.01 00:17:52.408 lat (usec): min=805, max=5993.5k, avg=2530.71, stdev=40830.00 00:17:52.408 clat percentiles (usec): 00:17:52.408 | 1.00th=[ 1844], 5.00th=[ 1942], 10.00th=[ 1975], 20.00th=[ 2040], 00:17:52.408 | 30.00th=[ 2089], 40.00th=[ 2114], 50.00th=[ 2147], 60.00th=[ 2147], 00:17:52.408 | 70.00th=[ 2180], 80.00th=[ 2212], 90.00th=[ 2278], 95.00th=[ 3228], 00:17:52.408 | 99.00th=[ 5145], 99.50th=[ 5604], 99.90th=[ 7373], 99.95th=[12649], 00:17:52.408 | 99.99th=[13173] 00:17:52.408 bw ( KiB/s): min=24320, max=123536, per=100.00%, avg=110631.63, stdev=13191.96, samples=108 00:17:52.408 iops : min= 6080, max=30884, avg=27657.91, stdev=3297.99, samples=108 00:17:52.408 write: IOPS=25.1k, BW=98.0MiB/s (103MB/s)(5878MiB/60003msec); 0 zone resets 00:17:52.408 slat (nsec): min=1179, max=531534, avg=5660.65, stdev=2190.00 00:17:52.408 clat (usec): min=717, max=5993.5k, avg=2563.19, stdev=37192.42 00:17:52.408 lat (usec): min=723, max=5993.5k, avg=2568.85, stdev=37192.41 00:17:52.408 clat percentiles (usec): 00:17:52.408 | 1.00th=[ 1893], 5.00th=[ 2024], 10.00th=[ 2057], 20.00th=[ 2147], 00:17:52.408 | 30.00th=[ 2180], 40.00th=[ 2212], 50.00th=[ 2245], 60.00th=[ 2245], 00:17:52.408 | 70.00th=[ 2278], 80.00th=[ 2311], 90.00th=[ 2343], 95.00th=[ 3130], 00:17:52.408 | 99.00th=[ 5211], 99.50th=[ 5735], 99.90th=[ 7439], 99.95th=[12387], 00:17:52.408 | 99.99th=[13304] 00:17:52.408 bw ( KiB/s): min=24736, max=123032, per=100.00%, avg=110496.00, stdev=13266.41, samples=108 00:17:52.408 iops : min= 6184, max=30758, avg=27624.00, stdev=3316.60, samples=108 00:17:52.408 lat (usec) : 750=0.01%, 1000=0.01% 00:17:52.408 lat (msec) : 2=8.72%, 4=88.47%, 10=2.75%, 20=0.05%, >=2000=0.01% 00:17:52.408 cpu : usr=5.58%, sys=28.78%, ctx=100700, majf=0, minf=13 00:17:52.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:52.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:52.408 issued rwts: total=1506830,1504767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.408 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:52.408 00:17:52.408 Run status group 0 (all jobs): 00:17:52.408 READ: bw=98.1MiB/s (103MB/s), 98.1MiB/s-98.1MiB/s (103MB/s-103MB/s), io=5886MiB (6172MB), run=60003-60003msec 00:17:52.408 WRITE: bw=98.0MiB/s (103MB/s), 98.0MiB/s-98.0MiB/s (103MB/s-103MB/s), io=5878MiB (6164MB), run=60003-60003msec 00:17:52.408 00:17:52.408 Disk stats (read/write): 00:17:52.408 ublkb1: ios=1503699/1501685, merge=0/0, ticks=3711656/3634313, in_queue=7345970, util=99.89% 00:17:52.408 09:50:24 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:52.408 09:50:24 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.408 09:50:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:52.408 [2024-11-28 09:50:24.258081] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:52.408 [2024-11-28 09:50:24.289309] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:17:52.408 [2024-11-28 09:50:24.289456] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:52.408 [2024-11-28 09:50:24.296181] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:52.409 [2024-11-28 09:50:24.296271] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:52.409 [2024-11-28 09:50:24.296281] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:52.409 09:50:24 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.409 09:50:24 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:52.409 09:50:24 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.409 09:50:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:52.409 [2024-11-28 09:50:24.312257] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:52.409 [2024-11-28 09:50:24.320176] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:52.409 [2024-11-28 09:50:24.320209] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:52.409 09:50:24 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.409 09:50:24 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:52.409 09:50:24 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:52.409 09:50:24 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74148 00:17:52.409 09:50:24 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74148 ']' 00:17:52.409 09:50:24 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74148 00:17:52.409 09:50:24 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:52.409 09:50:24 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.409 09:50:24 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74148 00:17:52.409 killing process with pid 74148 00:17:52.409 09:50:24 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.409 09:50:24 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.409 09:50:24 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74148' 00:17:52.409 09:50:24 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74148 00:17:52.409 09:50:24 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74148 00:17:52.409 [2024-11-28 09:50:25.407624] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:52.409 [2024-11-28 09:50:25.407681] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:52.409 00:17:52.409 real 1m4.386s 00:17:52.409 user 1m40.422s 00:17:52.409 sys 0m38.068s 00:17:52.409 09:50:26 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.409 09:50:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:52.409 ************************************ 00:17:52.409 END TEST ublk_recovery 00:17:52.409 ************************************ 00:17:52.409 09:50:26 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:52.409 09:50:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:52.409 09:50:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:52.409 09:50:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:52.409 09:50:26 -- common/autotest_common.sh@10 -- # set +x 00:17:52.409 09:50:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:52.409 09:50:26 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:52.409 09:50:26 -- 
spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:52.409 09:50:26 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:52.409 09:50:26 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:52.409 09:50:26 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:52.409 09:50:26 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:52.409 09:50:26 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:52.409 09:50:26 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:52.409 09:50:26 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:52.409 09:50:26 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:52.409 09:50:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:52.409 09:50:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.409 09:50:26 -- common/autotest_common.sh@10 -- # set +x 00:17:52.409 ************************************ 00:17:52.409 START TEST ftl 00:17:52.409 ************************************ 00:17:52.409 09:50:26 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:52.409 * Looking for test storage... 00:17:52.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.409 09:50:26 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:52.409 09:50:26 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:17:52.409 09:50:26 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:52.409 09:50:26 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:52.409 09:50:26 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.409 09:50:26 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.409 09:50:26 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.409 09:50:26 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.409 09:50:26 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.409 09:50:26 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.409 09:50:26 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.409 09:50:26 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.409 09:50:26 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.409 09:50:26 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.409 09:50:26 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.409 09:50:26 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:52.409 09:50:26 ftl -- scripts/common.sh@345 -- # : 1 00:17:52.409 09:50:26 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.409 09:50:26 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.409 09:50:26 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:52.409 09:50:26 ftl -- scripts/common.sh@353 -- # local d=1 00:17:52.409 09:50:26 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.409 09:50:26 ftl -- scripts/common.sh@355 -- # echo 1 00:17:52.409 09:50:26 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.409 09:50:26 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:52.409 09:50:26 ftl -- scripts/common.sh@353 -- # local d=2 00:17:52.409 09:50:26 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.409 09:50:26 ftl -- scripts/common.sh@355 -- # echo 2 00:17:52.409 09:50:26 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.409 09:50:26 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.409 09:50:26 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.409 09:50:26 ftl -- scripts/common.sh@368 -- # return 0 00:17:52.409 09:50:26 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.409 09:50:26 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:52.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.409 --rc genhtml_branch_coverage=1 00:17:52.409 --rc genhtml_function_coverage=1 00:17:52.409 --rc genhtml_legend=1 00:17:52.409 --rc geninfo_all_blocks=1 00:17:52.409 --rc geninfo_unexecuted_blocks=1 00:17:52.409 00:17:52.409 ' 00:17:52.409 09:50:26 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:52.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.409 --rc genhtml_branch_coverage=1 00:17:52.409 --rc genhtml_function_coverage=1 00:17:52.409 --rc genhtml_legend=1 00:17:52.409 --rc geninfo_all_blocks=1 00:17:52.409 --rc geninfo_unexecuted_blocks=1 00:17:52.409 00:17:52.409 ' 00:17:52.409 09:50:26 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:52.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.409 --rc genhtml_branch_coverage=1 00:17:52.409 --rc genhtml_function_coverage=1 00:17:52.409 --rc genhtml_legend=1 00:17:52.409 --rc geninfo_all_blocks=1 00:17:52.409 --rc geninfo_unexecuted_blocks=1 00:17:52.409 00:17:52.409 ' 00:17:52.409 09:50:26 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:52.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.409 --rc genhtml_branch_coverage=1 00:17:52.409 --rc genhtml_function_coverage=1 00:17:52.409 --rc genhtml_legend=1 00:17:52.409 --rc geninfo_all_blocks=1 00:17:52.409 --rc geninfo_unexecuted_blocks=1 00:17:52.409 00:17:52.409 ' 00:17:52.409 09:50:26 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:52.409 09:50:26 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:52.409 09:50:26 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.409 09:50:26 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.409 09:50:26 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
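The scripts/common.sh trace repeated in this preamble is the dotted-version comparison used to decide whether the installed lcov predates version 2 (lt 1.15 2): both version strings are split on '.', '-' and ':' into arrays, each field is reduced to a decimal, and the fields are compared left to right. A self-contained sketch of that comparison, simplified to numeric fields only (the real helper also strips non-numeric suffixes via its decimal function):

    # Return 0 if $1 sorts before $2 when compared field by field, e.g. 1.15 < 2.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov older than 2"

The outcome only feeds the LCOV_OPTS and LCOV exports seen right after the comparison in the trace.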
00:17:52.409 09:50:26 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:52.409 09:50:26 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.409 09:50:26 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:52.409 09:50:26 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:52.409 09:50:26 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.409 09:50:26 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.409 09:50:26 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:52.409 09:50:26 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:52.409 09:50:26 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.409 09:50:26 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.409 09:50:26 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:52.409 09:50:26 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:52.409 09:50:26 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.409 09:50:26 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.409 09:50:26 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:52.409 09:50:26 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:52.409 09:50:26 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.409 09:50:26 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.409 09:50:26 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.409 09:50:26 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.409 09:50:26 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:52.409 09:50:26 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:52.409 09:50:26 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.409 09:50:26 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.409 09:50:26 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.409 09:50:26 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:52.410 09:50:26 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:52.410 09:50:26 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:52.410 09:50:26 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:52.410 09:50:26 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:52.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:52.410 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:52.410 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:52.410 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:52.410 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:52.410 09:50:26 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74953 00:17:52.410 09:50:26 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:52.410 09:50:26 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74953 00:17:52.410 09:50:26 ftl -- common/autotest_common.sh@835 -- # '[' -z 74953 ']' 00:17:52.410 09:50:26 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.410 09:50:26 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.410 09:50:26 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.410 09:50:26 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.410 09:50:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:52.410 [2024-11-28 09:50:26.971258] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:52.410 [2024-11-28 09:50:26.971535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74953 ] 00:17:52.410 [2024-11-28 09:50:27.132504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.410 [2024-11-28 09:50:27.230636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.410 09:50:27 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.410 09:50:27 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:52.410 09:50:27 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:52.410 09:50:28 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:52.410 09:50:28 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:52.410 09:50:28 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@50 -- # break 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@63 -- # break 00:17:52.410 09:50:29 ftl -- ftl/ftl.sh@66 -- # killprocess 74953 00:17:52.410 09:50:29 ftl -- common/autotest_common.sh@954 -- # '[' -z 74953 ']' 00:17:52.410 09:50:29 ftl -- common/autotest_common.sh@958 -- # kill -0 74953 00:17:52.410 09:50:29 ftl -- common/autotest_common.sh@959 -- # uname 00:17:52.410 09:50:29 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.410 09:50:29 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74953 00:17:52.410 killing process with pid 74953 00:17:52.410 09:50:29 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.410 09:50:29 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.410 09:50:29 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74953' 00:17:52.410 09:50:29 ftl -- common/autotest_common.sh@973 -- # kill 74953 00:17:52.410 09:50:29 ftl -- common/autotest_common.sh@978 -- # wait 74953 00:17:52.410 09:50:30 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:52.410 09:50:30 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:52.410 09:50:30 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:52.410 09:50:30 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.410 09:50:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:52.410 ************************************ 00:17:52.410 START TEST ftl_fio_basic 00:17:52.410 ************************************ 00:17:52.410 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:52.410 * Looking for test storage... 00:17:52.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.410 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:52.410 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:52.410 09:50:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:52.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.410 --rc genhtml_branch_coverage=1 00:17:52.410 --rc genhtml_function_coverage=1 00:17:52.410 --rc genhtml_legend=1 00:17:52.410 --rc geninfo_all_blocks=1 00:17:52.410 --rc geninfo_unexecuted_blocks=1 00:17:52.410 00:17:52.410 ' 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:52.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.410 --rc genhtml_branch_coverage=1 00:17:52.410 --rc genhtml_function_coverage=1 00:17:52.410 --rc genhtml_legend=1 00:17:52.410 --rc geninfo_all_blocks=1 00:17:52.410 --rc geninfo_unexecuted_blocks=1 00:17:52.410 00:17:52.410 ' 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:52.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.410 --rc genhtml_branch_coverage=1 00:17:52.410 --rc genhtml_function_coverage=1 00:17:52.410 --rc genhtml_legend=1 00:17:52.410 --rc geninfo_all_blocks=1 00:17:52.410 --rc geninfo_unexecuted_blocks=1 00:17:52.410 00:17:52.410 ' 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:52.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.410 --rc genhtml_branch_coverage=1 00:17:52.410 --rc genhtml_function_coverage=1 00:17:52.410 --rc genhtml_legend=1 00:17:52.410 --rc geninfo_all_blocks=1 00:17:52.410 --rc geninfo_unexecuted_blocks=1 00:17:52.410 00:17:52.410 ' 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:52.410 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:52.410 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75084 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75084 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75084 ']' 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.411 09:50:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:52.411 [2024-11-28 09:50:31.123846] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
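fio.sh resolves its job list through a bash associative array keyed by suite name: the 'basic' argument passed down from ftl.sh selects 'randw-verify randw-verify-j2 randw-verify-depth128', which the script runs against ftl0 with timeout=240 set for the run. The lookup pattern in isolation (suite contents copied from the trace above; the loop body is a placeholder, not the real fio invocation):

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

    FTL_BDEV_NAME=ftl0
    tests=${suite['basic']}
    timeout=240
    for t in $tests; do
        echo "would run fio job $t against $FTL_BDEV_NAME (timeout ${timeout}s)"
    done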
00:17:52.411 [2024-11-28 09:50:31.124076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75084 ] 00:17:52.411 [2024-11-28 09:50:31.279377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:52.671 [2024-11-28 09:50:31.372335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.671 [2024-11-28 09:50:31.372591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.671 [2024-11-28 09:50:31.372613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.243 09:50:32 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.243 09:50:32 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:53.243 09:50:32 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:53.243 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:53.243 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:53.243 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:53.243 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:53.243 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:53.504 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:53.504 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:53.504 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:53.504 09:50:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:53.504 09:50:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:53.504 09:50:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:53.504 09:50:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:53.504 09:50:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:53.766 09:50:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:53.766 { 00:17:53.766 "name": "nvme0n1", 00:17:53.766 "aliases": [ 00:17:53.766 "7cfe6f56-032c-4b69-83b7-d30b7a304e16" 00:17:53.766 ], 00:17:53.766 "product_name": "NVMe disk", 00:17:53.766 "block_size": 4096, 00:17:53.766 "num_blocks": 1310720, 00:17:53.766 "uuid": "7cfe6f56-032c-4b69-83b7-d30b7a304e16", 00:17:53.766 "numa_id": -1, 00:17:53.766 "assigned_rate_limits": { 00:17:53.766 "rw_ios_per_sec": 0, 00:17:53.766 "rw_mbytes_per_sec": 0, 00:17:53.766 "r_mbytes_per_sec": 0, 00:17:53.766 "w_mbytes_per_sec": 0 00:17:53.766 }, 00:17:53.766 "claimed": false, 00:17:53.766 "zoned": false, 00:17:53.766 "supported_io_types": { 00:17:53.766 "read": true, 00:17:53.766 "write": true, 00:17:53.766 "unmap": true, 00:17:53.766 "flush": true, 00:17:53.766 "reset": true, 00:17:53.766 "nvme_admin": true, 00:17:53.766 "nvme_io": true, 00:17:53.766 "nvme_io_md": false, 00:17:53.766 "write_zeroes": true, 00:17:53.766 "zcopy": false, 00:17:53.766 "get_zone_info": false, 00:17:53.766 "zone_management": false, 00:17:53.766 "zone_append": false, 00:17:53.766 "compare": true, 00:17:53.766 "compare_and_write": false, 00:17:53.766 "abort": true, 00:17:53.766 
"seek_hole": false, 00:17:53.766 "seek_data": false, 00:17:53.766 "copy": true, 00:17:53.766 "nvme_iov_md": false 00:17:53.766 }, 00:17:53.766 "driver_specific": { 00:17:53.766 "nvme": [ 00:17:53.766 { 00:17:53.766 "pci_address": "0000:00:11.0", 00:17:53.766 "trid": { 00:17:53.766 "trtype": "PCIe", 00:17:53.766 "traddr": "0000:00:11.0" 00:17:53.766 }, 00:17:53.766 "ctrlr_data": { 00:17:53.766 "cntlid": 0, 00:17:53.766 "vendor_id": "0x1b36", 00:17:53.766 "model_number": "QEMU NVMe Ctrl", 00:17:53.766 "serial_number": "12341", 00:17:53.766 "firmware_revision": "8.0.0", 00:17:53.766 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:53.766 "oacs": { 00:17:53.766 "security": 0, 00:17:53.766 "format": 1, 00:17:53.766 "firmware": 0, 00:17:53.766 "ns_manage": 1 00:17:53.766 }, 00:17:53.766 "multi_ctrlr": false, 00:17:53.766 "ana_reporting": false 00:17:53.766 }, 00:17:53.766 "vs": { 00:17:53.766 "nvme_version": "1.4" 00:17:53.766 }, 00:17:53.766 "ns_data": { 00:17:53.766 "id": 1, 00:17:53.766 "can_share": false 00:17:53.766 } 00:17:53.766 } 00:17:53.766 ], 00:17:53.766 "mp_policy": "active_passive" 00:17:53.766 } 00:17:53.766 } 00:17:53.766 ]' 00:17:53.766 09:50:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:53.766 09:50:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:53.766 09:50:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:53.766 09:50:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:53.766 09:50:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:53.766 09:50:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:53.766 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:53.766 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:53.766 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:53.766 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:53.766 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:54.027 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:54.027 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:54.289 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=5719a2cf-c88e-43dd-abe7-dabfc6afc72a 00:17:54.289 09:50:32 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5719a2cf-c88e-43dd-abe7-dabfc6afc72a 00:17:54.549 09:50:33 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=597bf3de-05d8-48a5-8ce9-539fa5070c15 00:17:54.549 09:50:33 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 597bf3de-05d8-48a5-8ce9-539fa5070c15 00:17:54.549 09:50:33 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:54.549 09:50:33 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:54.549 09:50:33 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=597bf3de-05d8-48a5-8ce9-539fa5070c15 00:17:54.549 09:50:33 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:54.549 09:50:33 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 597bf3de-05d8-48a5-8ce9-539fa5070c15 00:17:54.549 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=597bf3de-05d8-48a5-8ce9-539fa5070c15 
00:17:54.549 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:54.549 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:54.549 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:54.549 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 597bf3de-05d8-48a5-8ce9-539fa5070c15 00:17:54.549 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:54.549 { 00:17:54.549 "name": "597bf3de-05d8-48a5-8ce9-539fa5070c15", 00:17:54.549 "aliases": [ 00:17:54.549 "lvs/nvme0n1p0" 00:17:54.549 ], 00:17:54.549 "product_name": "Logical Volume", 00:17:54.549 "block_size": 4096, 00:17:54.549 "num_blocks": 26476544, 00:17:54.549 "uuid": "597bf3de-05d8-48a5-8ce9-539fa5070c15", 00:17:54.549 "assigned_rate_limits": { 00:17:54.549 "rw_ios_per_sec": 0, 00:17:54.549 "rw_mbytes_per_sec": 0, 00:17:54.549 "r_mbytes_per_sec": 0, 00:17:54.549 "w_mbytes_per_sec": 0 00:17:54.549 }, 00:17:54.549 "claimed": false, 00:17:54.549 "zoned": false, 00:17:54.549 "supported_io_types": { 00:17:54.549 "read": true, 00:17:54.549 "write": true, 00:17:54.549 "unmap": true, 00:17:54.549 "flush": false, 00:17:54.549 "reset": true, 00:17:54.549 "nvme_admin": false, 00:17:54.549 "nvme_io": false, 00:17:54.549 "nvme_io_md": false, 00:17:54.549 "write_zeroes": true, 00:17:54.549 "zcopy": false, 00:17:54.549 "get_zone_info": false, 00:17:54.549 "zone_management": false, 00:17:54.549 "zone_append": false, 00:17:54.549 "compare": false, 00:17:54.549 "compare_and_write": false, 00:17:54.549 "abort": false, 00:17:54.549 "seek_hole": true, 00:17:54.549 "seek_data": true, 00:17:54.549 "copy": false, 00:17:54.549 "nvme_iov_md": false 00:17:54.549 }, 00:17:54.549 "driver_specific": { 00:17:54.549 "lvol": { 00:17:54.550 "lvol_store_uuid": "5719a2cf-c88e-43dd-abe7-dabfc6afc72a", 00:17:54.550 "base_bdev": "nvme0n1", 00:17:54.550 "thin_provision": true, 00:17:54.550 "num_allocated_clusters": 0, 00:17:54.550 "snapshot": false, 00:17:54.550 "clone": false, 00:17:54.550 "esnap_clone": false 00:17:54.550 } 00:17:54.550 } 00:17:54.550 } 00:17:54.550 ]' 00:17:54.550 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:54.550 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:54.550 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:54.550 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:54.550 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:54.550 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:54.550 09:50:33 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:54.550 09:50:33 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:54.550 09:50:33 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:54.809 09:50:33 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:54.809 09:50:33 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:54.809 09:50:33 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 597bf3de-05d8-48a5-8ce9-539fa5070c15 00:17:54.809 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=597bf3de-05d8-48a5-8ce9-539fa5070c15 00:17:54.809 09:50:33 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:54.809 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:54.809 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:54.809 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 597bf3de-05d8-48a5-8ce9-539fa5070c15 00:17:55.069 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:55.069 { 00:17:55.069 "name": "597bf3de-05d8-48a5-8ce9-539fa5070c15", 00:17:55.069 "aliases": [ 00:17:55.069 "lvs/nvme0n1p0" 00:17:55.069 ], 00:17:55.069 "product_name": "Logical Volume", 00:17:55.069 "block_size": 4096, 00:17:55.069 "num_blocks": 26476544, 00:17:55.069 "uuid": "597bf3de-05d8-48a5-8ce9-539fa5070c15", 00:17:55.069 "assigned_rate_limits": { 00:17:55.069 "rw_ios_per_sec": 0, 00:17:55.069 "rw_mbytes_per_sec": 0, 00:17:55.069 "r_mbytes_per_sec": 0, 00:17:55.069 "w_mbytes_per_sec": 0 00:17:55.069 }, 00:17:55.069 "claimed": false, 00:17:55.069 "zoned": false, 00:17:55.069 "supported_io_types": { 00:17:55.069 "read": true, 00:17:55.069 "write": true, 00:17:55.069 "unmap": true, 00:17:55.069 "flush": false, 00:17:55.069 "reset": true, 00:17:55.069 "nvme_admin": false, 00:17:55.069 "nvme_io": false, 00:17:55.069 "nvme_io_md": false, 00:17:55.069 "write_zeroes": true, 00:17:55.069 "zcopy": false, 00:17:55.069 "get_zone_info": false, 00:17:55.069 "zone_management": false, 00:17:55.069 "zone_append": false, 00:17:55.069 "compare": false, 00:17:55.069 "compare_and_write": false, 00:17:55.069 "abort": false, 00:17:55.069 "seek_hole": true, 00:17:55.069 "seek_data": true, 00:17:55.069 "copy": false, 00:17:55.069 "nvme_iov_md": false 00:17:55.069 }, 00:17:55.069 "driver_specific": { 00:17:55.069 "lvol": { 00:17:55.069 "lvol_store_uuid": "5719a2cf-c88e-43dd-abe7-dabfc6afc72a", 00:17:55.069 "base_bdev": "nvme0n1", 00:17:55.069 "thin_provision": true, 00:17:55.069 "num_allocated_clusters": 0, 00:17:55.069 "snapshot": false, 00:17:55.069 "clone": false, 00:17:55.069 "esnap_clone": false 00:17:55.069 } 00:17:55.069 } 00:17:55.069 } 00:17:55.069 ]' 00:17:55.069 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:55.069 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:55.069 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:55.069 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:55.069 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:55.069 09:50:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:55.069 09:50:33 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:55.069 09:50:33 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:55.327 09:50:34 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:55.327 09:50:34 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:55.327 09:50:34 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:55.327 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:55.327 09:50:34 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 597bf3de-05d8-48a5-8ce9-539fa5070c15 00:17:55.327 09:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=597bf3de-05d8-48a5-8ce9-539fa5070c15 00:17:55.327 09:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:55.327 09:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:55.327 09:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:55.327 09:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 597bf3de-05d8-48a5-8ce9-539fa5070c15 00:17:55.585 09:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:55.585 { 00:17:55.585 "name": "597bf3de-05d8-48a5-8ce9-539fa5070c15", 00:17:55.585 "aliases": [ 00:17:55.585 "lvs/nvme0n1p0" 00:17:55.585 ], 00:17:55.585 "product_name": "Logical Volume", 00:17:55.585 "block_size": 4096, 00:17:55.585 "num_blocks": 26476544, 00:17:55.585 "uuid": "597bf3de-05d8-48a5-8ce9-539fa5070c15", 00:17:55.585 "assigned_rate_limits": { 00:17:55.585 "rw_ios_per_sec": 0, 00:17:55.585 "rw_mbytes_per_sec": 0, 00:17:55.585 "r_mbytes_per_sec": 0, 00:17:55.585 "w_mbytes_per_sec": 0 00:17:55.585 }, 00:17:55.585 "claimed": false, 00:17:55.585 "zoned": false, 00:17:55.585 "supported_io_types": { 00:17:55.585 "read": true, 00:17:55.585 "write": true, 00:17:55.585 "unmap": true, 00:17:55.585 "flush": false, 00:17:55.585 "reset": true, 00:17:55.585 "nvme_admin": false, 00:17:55.585 "nvme_io": false, 00:17:55.585 "nvme_io_md": false, 00:17:55.585 "write_zeroes": true, 00:17:55.585 "zcopy": false, 00:17:55.585 "get_zone_info": false, 00:17:55.585 "zone_management": false, 00:17:55.585 "zone_append": false, 00:17:55.585 "compare": false, 00:17:55.585 "compare_and_write": false, 00:17:55.585 "abort": false, 00:17:55.585 "seek_hole": true, 00:17:55.585 "seek_data": true, 00:17:55.585 "copy": false, 00:17:55.585 "nvme_iov_md": false 00:17:55.585 }, 00:17:55.585 "driver_specific": { 00:17:55.585 "lvol": { 00:17:55.585 "lvol_store_uuid": "5719a2cf-c88e-43dd-abe7-dabfc6afc72a", 00:17:55.585 "base_bdev": "nvme0n1", 00:17:55.585 "thin_provision": true, 00:17:55.585 "num_allocated_clusters": 0, 00:17:55.585 "snapshot": false, 00:17:55.585 "clone": false, 00:17:55.585 "esnap_clone": false 00:17:55.585 } 00:17:55.585 } 00:17:55.585 } 00:17:55.585 ]' 00:17:55.585 09:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:55.585 09:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:55.585 09:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:55.585 09:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:55.585 09:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:55.585 09:50:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:55.585 09:50:34 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:55.585 09:50:34 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:55.585 09:50:34 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 597bf3de-05d8-48a5-8ce9-539fa5070c15 -c nvc0n1p0 --l2p_dram_limit 60 00:17:55.844 [2024-11-28 09:50:34.583195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.844 [2024-11-28 09:50:34.583236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:55.844 [2024-11-28 09:50:34.583249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:55.844 
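Condensed, the device setup traced since the pid 75084 target came up is a short RPC sequence: the 0000:00:11.0 namespace is attached and carved into a 103424 MiB thin-provisioned lvol that becomes the FTL base device, the 0000:00:10.0 namespace is attached and a 5171 MiB split of it becomes the write-buffer cache, and bdev_ftl_create stacks ftl0 on top with a 60 MiB DRAM budget for the L2P table. A sketch of that sequence (sizes and PCI addresses taken from this run; waitforlisten and error handling omitted):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: thin-provisioned lvol on the 0000:00:11.0 namespace.
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    lvs=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)
    base=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")   # prints the lvol UUID

    # Write-buffer cache: 5171 MiB split of the 0000:00:10.0 namespace.
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1                       # yields nvc0n1p0

    # FTL bdev over base + cache, 60 MiB of DRAM for the logical-to-physical map.
    $RPC -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 --l2p_dram_limit 60

The mngt/ftl_mngt traces that follow are the bring-up of that ftl0 instance: superblock creation, memory pool and band initialization, and the layout setup dumped at the end of this excerpt.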
[2024-11-28 09:50:34.583256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.844 [2024-11-28 09:50:34.583317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.844 [2024-11-28 09:50:34.583325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:55.844 [2024-11-28 09:50:34.583334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:55.844 [2024-11-28 09:50:34.583340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.844 [2024-11-28 09:50:34.583368] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:55.844 [2024-11-28 09:50:34.583907] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:55.844 [2024-11-28 09:50:34.583929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.844 [2024-11-28 09:50:34.583936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:55.844 [2024-11-28 09:50:34.583948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:17:55.844 [2024-11-28 09:50:34.583954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.844 [2024-11-28 09:50:34.583986] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fcf833b9-8d0e-472e-8881-5f8be59c697f 00:17:55.844 [2024-11-28 09:50:34.585318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.844 [2024-11-28 09:50:34.585349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:55.844 [2024-11-28 09:50:34.585359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:55.844 [2024-11-28 09:50:34.585368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.844 [2024-11-28 09:50:34.592288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.844 [2024-11-28 09:50:34.592318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:55.844 [2024-11-28 09:50:34.592326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.831 ms 00:17:55.844 [2024-11-28 09:50:34.592339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.844 [2024-11-28 09:50:34.592425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.844 [2024-11-28 09:50:34.592435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:55.844 [2024-11-28 09:50:34.592442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:55.844 [2024-11-28 09:50:34.592452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.844 [2024-11-28 09:50:34.592501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.844 [2024-11-28 09:50:34.592511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:55.844 [2024-11-28 09:50:34.592518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:55.844 [2024-11-28 09:50:34.592525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.844 [2024-11-28 09:50:34.592554] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:55.844 [2024-11-28 09:50:34.595833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.844 [2024-11-28 
09:50:34.595857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:55.844 [2024-11-28 09:50:34.595870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.282 ms 00:17:55.844 [2024-11-28 09:50:34.595876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.844 [2024-11-28 09:50:34.595909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.844 [2024-11-28 09:50:34.595917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:55.844 [2024-11-28 09:50:34.595925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:55.844 [2024-11-28 09:50:34.595931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.844 [2024-11-28 09:50:34.595957] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:55.844 [2024-11-28 09:50:34.596078] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:55.844 [2024-11-28 09:50:34.596092] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:55.844 [2024-11-28 09:50:34.596101] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:55.844 [2024-11-28 09:50:34.596111] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:55.844 [2024-11-28 09:50:34.596119] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:55.844 [2024-11-28 09:50:34.596127] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:55.844 [2024-11-28 09:50:34.596133] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:55.844 [2024-11-28 09:50:34.596141] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:55.844 [2024-11-28 09:50:34.596146] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:55.844 [2024-11-28 09:50:34.596171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.844 [2024-11-28 09:50:34.596177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:55.845 [2024-11-28 09:50:34.596188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:17:55.845 [2024-11-28 09:50:34.596195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.845 [2024-11-28 09:50:34.596273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.845 [2024-11-28 09:50:34.596281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:55.845 [2024-11-28 09:50:34.596289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:55.845 [2024-11-28 09:50:34.596295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.845 [2024-11-28 09:50:34.596386] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:55.845 [2024-11-28 09:50:34.596397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:55.845 [2024-11-28 09:50:34.596406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.845 [2024-11-28 09:50:34.596412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.845 [2024-11-28 09:50:34.596419] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:17:55.845 [2024-11-28 09:50:34.596425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:55.845 [2024-11-28 09:50:34.596431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:55.845 [2024-11-28 09:50:34.596437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:55.845 [2024-11-28 09:50:34.596445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:55.845 [2024-11-28 09:50:34.596450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.845 [2024-11-28 09:50:34.596457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:55.845 [2024-11-28 09:50:34.596462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:55.845 [2024-11-28 09:50:34.596468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.845 [2024-11-28 09:50:34.596474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:55.845 [2024-11-28 09:50:34.596481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:55.845 [2024-11-28 09:50:34.596487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.845 [2024-11-28 09:50:34.596496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:55.845 [2024-11-28 09:50:34.596506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:55.845 [2024-11-28 09:50:34.596513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.845 [2024-11-28 09:50:34.596519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:55.845 [2024-11-28 09:50:34.596525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:55.845 [2024-11-28 09:50:34.596532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.845 [2024-11-28 09:50:34.596539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:55.845 [2024-11-28 09:50:34.596545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:55.845 [2024-11-28 09:50:34.596551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.845 [2024-11-28 09:50:34.596556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:55.845 [2024-11-28 09:50:34.596562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:55.845 [2024-11-28 09:50:34.596567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.845 [2024-11-28 09:50:34.596574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:55.845 [2024-11-28 09:50:34.596579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:55.845 [2024-11-28 09:50:34.596585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.845 [2024-11-28 09:50:34.596590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:55.845 [2024-11-28 09:50:34.596598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:55.845 [2024-11-28 09:50:34.596614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.845 [2024-11-28 09:50:34.596621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:55.845 [2024-11-28 09:50:34.596626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:55.845 [2024-11-28 09:50:34.596632] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.845 [2024-11-28 09:50:34.596637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:55.845 [2024-11-28 09:50:34.596644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:55.845 [2024-11-28 09:50:34.596650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.845 [2024-11-28 09:50:34.596656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:55.845 [2024-11-28 09:50:34.596661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:55.845 [2024-11-28 09:50:34.596668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.845 [2024-11-28 09:50:34.596673] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:55.845 [2024-11-28 09:50:34.596681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:55.845 [2024-11-28 09:50:34.596686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.845 [2024-11-28 09:50:34.596694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.845 [2024-11-28 09:50:34.596700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:55.845 [2024-11-28 09:50:34.596709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:55.845 [2024-11-28 09:50:34.596716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:55.845 [2024-11-28 09:50:34.596723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:55.845 [2024-11-28 09:50:34.596729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:55.845 [2024-11-28 09:50:34.596735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:55.845 [2024-11-28 09:50:34.596743] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:55.845 [2024-11-28 09:50:34.596754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.845 [2024-11-28 09:50:34.596761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:55.845 [2024-11-28 09:50:34.596768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:55.845 [2024-11-28 09:50:34.596774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:55.845 [2024-11-28 09:50:34.596781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:55.845 [2024-11-28 09:50:34.596788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:55.845 [2024-11-28 09:50:34.596794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:55.845 [2024-11-28 09:50:34.596799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:55.845 [2024-11-28 09:50:34.596806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:17:55.845 [2024-11-28 09:50:34.596811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:55.845 [2024-11-28 09:50:34.596820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:55.845 [2024-11-28 09:50:34.596826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:55.845 [2024-11-28 09:50:34.596834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:55.845 [2024-11-28 09:50:34.596839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:55.845 [2024-11-28 09:50:34.596846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:55.845 [2024-11-28 09:50:34.596852] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:55.845 [2024-11-28 09:50:34.596861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.845 [2024-11-28 09:50:34.596867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:55.845 [2024-11-28 09:50:34.596874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:55.845 [2024-11-28 09:50:34.596879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:55.845 [2024-11-28 09:50:34.596886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:55.845 [2024-11-28 09:50:34.596907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.845 [2024-11-28 09:50:34.596914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:55.845 [2024-11-28 09:50:34.596921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:17:55.845 [2024-11-28 09:50:34.596927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.845 [2024-11-28 09:50:34.596994] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
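Note on the setup just traced: the FTL instance started here was assembled with the rpc.py calls shown earlier in this log. The 103424 echoed by the size helper is the lvol size in MiB (26,476,544 blocks x 4096 B / 2^20), and cache_size=5171 is the MiB slice split off nvc0n1 for the write-buffer cache. A condensed sketch of that creation sequence, using only names and arguments that appear verbatim in the trace above, not independent documentation:

  # carve one 5171 MiB partition out of the cache namespace for the FTL write buffer
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  # build ftl0 on the thin-provisioned lvol, backed by that partition, with the
  # resident L2P table capped at 60 MiB of DRAM (hence the later "59 (of 60) MiB" notice)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 597bf3de-05d8-48a5-8ce9-539fa5070c15 -c nvc0n1p0 --l2p_dram_limit 60

Because the cache partition is freshly created, startup scrubs its 5 chunks first; that scrub accounts for roughly 3.0 s of the 3.4 s total FTL startup time reported below.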
00:17:55.845 [2024-11-28 09:50:34.597006] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:59.126 [2024-11-28 09:50:37.620777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.126 [2024-11-28 09:50:37.621013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:59.126 [2024-11-28 09:50:37.621035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3023.769 ms 00:17:59.126 [2024-11-28 09:50:37.621047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.126 [2024-11-28 09:50:37.649067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.126 [2024-11-28 09:50:37.649112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:59.126 [2024-11-28 09:50:37.649125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.789 ms 00:17:59.126 [2024-11-28 09:50:37.649135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.126 [2024-11-28 09:50:37.649283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.126 [2024-11-28 09:50:37.649297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:59.126 [2024-11-28 09:50:37.649306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:59.126 [2024-11-28 09:50:37.649318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.126 [2024-11-28 09:50:37.692628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.126 [2024-11-28 09:50:37.692674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:59.126 [2024-11-28 09:50:37.692687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.249 ms 00:17:59.126 [2024-11-28 09:50:37.692698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.126 [2024-11-28 09:50:37.692743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.126 [2024-11-28 09:50:37.692754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:59.126 [2024-11-28 09:50:37.692763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:59.126 [2024-11-28 09:50:37.692772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.126 [2024-11-28 09:50:37.693239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.126 [2024-11-28 09:50:37.693259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:59.126 [2024-11-28 09:50:37.693271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:17:59.126 [2024-11-28 09:50:37.693281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.126 [2024-11-28 09:50:37.693409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.126 [2024-11-28 09:50:37.693421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:59.126 [2024-11-28 09:50:37.693429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:59.126 [2024-11-28 09:50:37.693440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.126 [2024-11-28 09:50:37.709355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.126 [2024-11-28 09:50:37.709388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:59.126 [2024-11-28 
09:50:37.709399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.891 ms 00:17:59.126 [2024-11-28 09:50:37.709409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.126 [2024-11-28 09:50:37.721742] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:59.126 [2024-11-28 09:50:37.738679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.126 [2024-11-28 09:50:37.738711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:59.126 [2024-11-28 09:50:37.738726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.180 ms 00:17:59.126 [2024-11-28 09:50:37.738734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.126 [2024-11-28 09:50:37.787523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.126 [2024-11-28 09:50:37.787713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:59.126 [2024-11-28 09:50:37.787738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.752 ms 00:17:59.126 [2024-11-28 09:50:37.787747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.126 [2024-11-28 09:50:37.787991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.126 [2024-11-28 09:50:37.788015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:59.127 [2024-11-28 09:50:37.788029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:17:59.127 [2024-11-28 09:50:37.788037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.127 [2024-11-28 09:50:37.811021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.127 [2024-11-28 09:50:37.811162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:59.127 [2024-11-28 09:50:37.811182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.925 ms 00:17:59.127 [2024-11-28 09:50:37.811191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.127 [2024-11-28 09:50:37.833475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.127 [2024-11-28 09:50:37.833506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:59.127 [2024-11-28 09:50:37.833520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.202 ms 00:17:59.127 [2024-11-28 09:50:37.833527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.127 [2024-11-28 09:50:37.834105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.127 [2024-11-28 09:50:37.834127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:59.127 [2024-11-28 09:50:37.834138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:17:59.127 [2024-11-28 09:50:37.834145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.127 [2024-11-28 09:50:37.907481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.127 [2024-11-28 09:50:37.907522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:59.127 [2024-11-28 09:50:37.907543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.273 ms 00:17:59.127 [2024-11-28 09:50:37.907552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.127 [2024-11-28 
09:50:37.931652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.127 [2024-11-28 09:50:37.931686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:59.127 [2024-11-28 09:50:37.931700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.009 ms 00:17:59.127 [2024-11-28 09:50:37.931708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.127 [2024-11-28 09:50:37.954357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.127 [2024-11-28 09:50:37.954388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:59.127 [2024-11-28 09:50:37.954401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.605 ms 00:17:59.127 [2024-11-28 09:50:37.954409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.127 [2024-11-28 09:50:37.977467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.127 [2024-11-28 09:50:37.977613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:59.127 [2024-11-28 09:50:37.977632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.011 ms 00:17:59.127 [2024-11-28 09:50:37.977640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.127 [2024-11-28 09:50:37.977684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.127 [2024-11-28 09:50:37.977693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:59.127 [2024-11-28 09:50:37.977709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:59.127 [2024-11-28 09:50:37.977717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.127 [2024-11-28 09:50:37.977811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.127 [2024-11-28 09:50:37.977821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:59.127 [2024-11-28 09:50:37.977831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:59.127 [2024-11-28 09:50:37.977839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.127 [2024-11-28 09:50:37.978837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3395.181 ms, result 0 00:17:59.127 { 00:17:59.127 "name": "ftl0", 00:17:59.127 "uuid": "fcf833b9-8d0e-472e-8881-5f8be59c697f" 00:17:59.127 } 00:17:59.127 09:50:37 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:59.127 09:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:59.127 09:50:37 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:59.127 09:50:38 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:59.127 09:50:38 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:59.127 09:50:38 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:59.127 09:50:38 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:59.385 09:50:38 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:59.643 [ 00:17:59.643 { 00:17:59.643 "name": "ftl0", 00:17:59.643 "aliases": [ 00:17:59.643 "fcf833b9-8d0e-472e-8881-5f8be59c697f" 00:17:59.643 ], 00:17:59.643 "product_name": "FTL 
disk", 00:17:59.643 "block_size": 4096, 00:17:59.643 "num_blocks": 20971520, 00:17:59.643 "uuid": "fcf833b9-8d0e-472e-8881-5f8be59c697f", 00:17:59.643 "assigned_rate_limits": { 00:17:59.643 "rw_ios_per_sec": 0, 00:17:59.643 "rw_mbytes_per_sec": 0, 00:17:59.643 "r_mbytes_per_sec": 0, 00:17:59.643 "w_mbytes_per_sec": 0 00:17:59.643 }, 00:17:59.643 "claimed": false, 00:17:59.643 "zoned": false, 00:17:59.643 "supported_io_types": { 00:17:59.643 "read": true, 00:17:59.643 "write": true, 00:17:59.643 "unmap": true, 00:17:59.643 "flush": true, 00:17:59.643 "reset": false, 00:17:59.643 "nvme_admin": false, 00:17:59.643 "nvme_io": false, 00:17:59.643 "nvme_io_md": false, 00:17:59.643 "write_zeroes": true, 00:17:59.643 "zcopy": false, 00:17:59.643 "get_zone_info": false, 00:17:59.643 "zone_management": false, 00:17:59.643 "zone_append": false, 00:17:59.643 "compare": false, 00:17:59.643 "compare_and_write": false, 00:17:59.643 "abort": false, 00:17:59.643 "seek_hole": false, 00:17:59.643 "seek_data": false, 00:17:59.643 "copy": false, 00:17:59.643 "nvme_iov_md": false 00:17:59.643 }, 00:17:59.643 "driver_specific": { 00:17:59.643 "ftl": { 00:17:59.643 "base_bdev": "597bf3de-05d8-48a5-8ce9-539fa5070c15", 00:17:59.643 "cache": "nvc0n1p0" 00:17:59.643 } 00:17:59.643 } 00:17:59.643 } 00:17:59.643 ] 00:17:59.643 09:50:38 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:59.643 09:50:38 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:59.643 09:50:38 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:59.902 09:50:38 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:59.902 09:50:38 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:00.161 [2024-11-28 09:50:38.827577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.161 [2024-11-28 09:50:38.827615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:00.161 [2024-11-28 09:50:38.827626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:00.161 [2024-11-28 09:50:38.827636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.161 [2024-11-28 09:50:38.827663] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:00.161 [2024-11-28 09:50:38.829903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.161 [2024-11-28 09:50:38.829926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:00.161 [2024-11-28 09:50:38.829939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.223 ms 00:18:00.161 [2024-11-28 09:50:38.829946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.161 [2024-11-28 09:50:38.830360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.161 [2024-11-28 09:50:38.830375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:00.161 [2024-11-28 09:50:38.830384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:18:00.161 [2024-11-28 09:50:38.830390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.161 [2024-11-28 09:50:38.832825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.161 [2024-11-28 09:50:38.832845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:00.161 
[2024-11-28 09:50:38.832854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.413 ms 00:18:00.161 [2024-11-28 09:50:38.832860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.161 [2024-11-28 09:50:38.837552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.161 [2024-11-28 09:50:38.837574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:00.161 [2024-11-28 09:50:38.837583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.663 ms 00:18:00.161 [2024-11-28 09:50:38.837589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.161 [2024-11-28 09:50:38.855568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.161 [2024-11-28 09:50:38.855709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:00.161 [2024-11-28 09:50:38.855737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.914 ms 00:18:00.161 [2024-11-28 09:50:38.855743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.161 [2024-11-28 09:50:38.867729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.161 [2024-11-28 09:50:38.867758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:00.161 [2024-11-28 09:50:38.867772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.948 ms 00:18:00.161 [2024-11-28 09:50:38.867779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.161 [2024-11-28 09:50:38.867936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.161 [2024-11-28 09:50:38.867946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:00.161 [2024-11-28 09:50:38.867955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:18:00.161 [2024-11-28 09:50:38.867961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.161 [2024-11-28 09:50:38.885700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.161 [2024-11-28 09:50:38.885725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:00.162 [2024-11-28 09:50:38.885734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.715 ms 00:18:00.162 [2024-11-28 09:50:38.885740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.162 [2024-11-28 09:50:38.903026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.162 [2024-11-28 09:50:38.903138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:00.162 [2024-11-28 09:50:38.903171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.252 ms 00:18:00.162 [2024-11-28 09:50:38.903177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.162 [2024-11-28 09:50:38.919990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.162 [2024-11-28 09:50:38.920073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:00.162 [2024-11-28 09:50:38.920117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.780 ms 00:18:00.162 [2024-11-28 09:50:38.920134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.162 [2024-11-28 09:50:38.937219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.162 [2024-11-28 09:50:38.937309] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:00.162 [2024-11-28 09:50:38.937349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.951 ms 00:18:00.162 [2024-11-28 09:50:38.937367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.162 [2024-11-28 09:50:38.937406] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:00.162 [2024-11-28 09:50:38.937688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.937737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.937800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.937828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.937850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.937899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.937922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.937949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.937972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 
[2024-11-28 09:50:38.938467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.938976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:00.162 [2024-11-28 09:50:38.939376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:00.162 [2024-11-28 09:50:38.939676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.939728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.939753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.939776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.939799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.939822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.939846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.939938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.939964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.939987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.940926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.941001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.941047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.941070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.941094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.941116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.941199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.941224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.941251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:00.163 [2024-11-28 09:50:38.941280] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:00.163 [2024-11-28 09:50:38.941291] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fcf833b9-8d0e-472e-8881-5f8be59c697f 00:18:00.163 [2024-11-28 09:50:38.941299] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:00.163 [2024-11-28 09:50:38.941308] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:00.163 [2024-11-28 09:50:38.941316] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:00.163 [2024-11-28 09:50:38.941324] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:00.163 [2024-11-28 09:50:38.941330] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:00.163 [2024-11-28 09:50:38.941338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:00.163 [2024-11-28 09:50:38.941344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:00.163 [2024-11-28 09:50:38.941351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:00.163 [2024-11-28 09:50:38.941356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:00.163 [2024-11-28 09:50:38.941364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.163 [2024-11-28 09:50:38.941371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:00.163 [2024-11-28 09:50:38.941380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.960 ms 00:18:00.163 [2024-11-28 09:50:38.941386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.163 [2024-11-28 09:50:38.951244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.163 [2024-11-28 09:50:38.951270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:00.163 [2024-11-28 09:50:38.951280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.813 ms 00:18:00.163 [2024-11-28 09:50:38.951287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.163 [2024-11-28 09:50:38.951604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.163 [2024-11-28 09:50:38.951619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:00.163 [2024-11-28 09:50:38.951627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:18:00.163 [2024-11-28 09:50:38.951634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.163 [2024-11-28 09:50:38.988177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.163 [2024-11-28 09:50:38.988204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:00.163 [2024-11-28 09:50:38.988216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.163 [2024-11-28 09:50:38.988223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
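Note on the shutdown just traced: it was driven by the single unload RPC invoked at fio.sh@73 above. Since nothing was written through ftl0 in this phase, every band is reported with 0 of 261120 valid blocks in state free, the device counts 960 internal writes against 0 user writes (WAF: inf), and the rollback entries around this point undo each startup step in reverse order. For reference, the call as it appears in this log:

  # persists L2P, band/trim metadata and the superblock, marks the device clean, then detaches it
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0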
00:18:00.163 [2024-11-28 09:50:38.988275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.163 [2024-11-28 09:50:38.988281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:00.163 [2024-11-28 09:50:38.988290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.163 [2024-11-28 09:50:38.988296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.163 [2024-11-28 09:50:38.988378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.163 [2024-11-28 09:50:38.988389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:00.163 [2024-11-28 09:50:38.988397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.163 [2024-11-28 09:50:38.988403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.163 [2024-11-28 09:50:38.988426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.163 [2024-11-28 09:50:38.988432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:00.163 [2024-11-28 09:50:38.988440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.163 [2024-11-28 09:50:38.988445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.422 [2024-11-28 09:50:39.054196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.422 [2024-11-28 09:50:39.054334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:00.422 [2024-11-28 09:50:39.054350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.422 [2024-11-28 09:50:39.054357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.422 [2024-11-28 09:50:39.105108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.422 [2024-11-28 09:50:39.105144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:00.422 [2024-11-28 09:50:39.105167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.422 [2024-11-28 09:50:39.105174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.422 [2024-11-28 09:50:39.105257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.422 [2024-11-28 09:50:39.105266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:00.422 [2024-11-28 09:50:39.105277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.422 [2024-11-28 09:50:39.105283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.422 [2024-11-28 09:50:39.105370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.422 [2024-11-28 09:50:39.105379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:00.422 [2024-11-28 09:50:39.105387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.422 [2024-11-28 09:50:39.105392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.422 [2024-11-28 09:50:39.105482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.422 [2024-11-28 09:50:39.105491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:00.422 [2024-11-28 09:50:39.105502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.422 [2024-11-28 
09:50:39.105510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.422 [2024-11-28 09:50:39.105556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.422 [2024-11-28 09:50:39.105565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:00.422 [2024-11-28 09:50:39.105572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.422 [2024-11-28 09:50:39.105578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.422 [2024-11-28 09:50:39.105626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.422 [2024-11-28 09:50:39.105633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:00.422 [2024-11-28 09:50:39.105642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.422 [2024-11-28 09:50:39.105649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.422 [2024-11-28 09:50:39.105703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.422 [2024-11-28 09:50:39.105711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:00.422 [2024-11-28 09:50:39.105720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.422 [2024-11-28 09:50:39.105726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.422 [2024-11-28 09:50:39.105871] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 278.264 ms, result 0 00:18:00.422 true 00:18:00.422 09:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75084 00:18:00.422 09:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75084 ']' 00:18:00.422 09:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75084 00:18:00.422 09:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:00.422 09:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.422 09:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75084 00:18:00.422 09:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.422 killing process with pid 75084 00:18:00.422 09:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.422 09:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75084' 00:18:00.422 09:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75084 00:18:00.422 09:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75084 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:06.981 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:06.982 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:06.982 09:50:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:06.982 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:06.982 fio-3.35 00:18:06.982 Starting 1 thread 00:18:12.267 00:18:12.267 test: (groupid=0, jobs=1): err= 0: pid=75267: Thu Nov 28 09:50:50 2024 00:18:12.267 read: IOPS=891, BW=59.2MiB/s (62.1MB/s)(255MiB/4298msec) 00:18:12.267 slat (nsec): min=4133, max=32249, avg=7408.67, stdev=3689.06 00:18:12.267 clat (usec): min=275, max=1397, avg=507.49, stdev=246.78 00:18:12.267 lat (usec): min=280, max=1408, avg=514.90, stdev=249.41 00:18:12.267 clat percentiles (usec): 00:18:12.267 | 1.00th=[ 318], 5.00th=[ 322], 10.00th=[ 322], 20.00th=[ 326], 00:18:12.267 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 441], 00:18:12.267 | 70.00th=[ 570], 80.00th=[ 857], 90.00th=[ 938], 95.00th=[ 963], 00:18:12.267 | 99.00th=[ 1074], 99.50th=[ 1172], 99.90th=[ 1237], 99.95th=[ 1336], 00:18:12.267 | 99.99th=[ 1401] 00:18:12.267 write: IOPS=897, BW=59.6MiB/s (62.5MB/s)(256MiB/4295msec); 0 zone resets 00:18:12.267 slat (nsec): min=14864, max=64684, avg=21735.08, stdev=5465.15 00:18:12.267 clat (usec): min=303, max=3004, avg=567.50, stdev=283.50 00:18:12.267 lat (usec): min=325, max=3022, avg=589.24, stdev=287.24 00:18:12.267 clat percentiles (usec): 00:18:12.267 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 351], 00:18:12.267 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 379], 60.00th=[ 478], 00:18:12.267 | 70.00th=[ 668], 80.00th=[ 955], 90.00th=[ 1029], 95.00th=[ 1057], 00:18:12.267 | 99.00th=[ 1270], 99.50th=[ 1352], 99.90th=[ 1827], 99.95th=[ 2008], 00:18:12.267 | 99.99th=[ 2999] 00:18:12.267 bw ( KiB/s): min=34952, max=91528, per=97.21%, avg=59347.00, stdev=24801.16, samples=8 00:18:12.267 iops : min= 514, max= 1346, avg=872.75, stdev=364.72, samples=8 00:18:12.267 lat (usec) : 500=62.24%, 750=14.24%, 
1000=15.80% 00:18:12.267 lat (msec) : 2=7.69%, 4=0.03% 00:18:12.267 cpu : usr=99.00%, sys=0.14%, ctx=7, majf=0, minf=1169 00:18:12.267 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:12.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.267 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.267 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:12.267 00:18:12.267 Run status group 0 (all jobs): 00:18:12.267 READ: bw=59.2MiB/s (62.1MB/s), 59.2MiB/s-59.2MiB/s (62.1MB/s-62.1MB/s), io=255MiB (267MB), run=4298-4298msec 00:18:12.267 WRITE: bw=59.6MiB/s (62.5MB/s), 59.6MiB/s-59.6MiB/s (62.5MB/s-62.5MB/s), io=256MiB (269MB), run=4295-4295msec 00:18:13.208 ----------------------------------------------------- 00:18:13.208 Suppressions used: 00:18:13.208 count bytes template 00:18:13.208 1 5 /usr/src/fio/parse.c 00:18:13.208 1 8 libtcmalloc_minimal.so 00:18:13.208 1 904 libcrypto.so 00:18:13.208 ----------------------------------------------------- 00:18:13.208 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:13.208 09:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:13.469 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:13.469 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:13.469 fio-3.35 00:18:13.469 Starting 2 threads 00:18:40.099 00:18:40.099 first_half: (groupid=0, jobs=1): err= 0: pid=75370: Thu Nov 28 09:51:17 2024 00:18:40.099 read: IOPS=2699, BW=10.5MiB/s (11.1MB/s)(255MiB/24171msec) 00:18:40.099 slat (nsec): min=3107, max=23302, avg=4391.19, stdev=1122.09 00:18:40.099 clat (usec): min=676, max=423462, avg=36538.07, stdev=22822.93 00:18:40.099 lat (usec): min=681, max=423466, avg=36542.46, stdev=22823.04 00:18:40.099 clat percentiles (msec): 00:18:40.099 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:18:40.099 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:18:40.099 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 42], 95.00th=[ 61], 00:18:40.099 | 99.00th=[ 155], 99.50th=[ 182], 99.90th=[ 253], 99.95th=[ 330], 00:18:40.099 | 99.99th=[ 405] 00:18:40.099 write: IOPS=3232, BW=12.6MiB/s (13.2MB/s)(256MiB/20272msec); 0 zone resets 00:18:40.099 slat (usec): min=3, max=2874, avg= 6.59, stdev=19.83 00:18:40.099 clat (usec): min=347, max=113279, avg=10809.19, stdev=18716.41 00:18:40.099 lat (usec): min=353, max=113286, avg=10815.78, stdev=18716.61 00:18:40.099 clat percentiles (usec): 00:18:40.099 | 1.00th=[ 734], 5.00th=[ 988], 10.00th=[ 1221], 20.00th=[ 1631], 00:18:40.099 | 30.00th=[ 2638], 40.00th=[ 4047], 50.00th=[ 4948], 60.00th=[ 5735], 00:18:40.099 | 70.00th=[ 7373], 80.00th=[ 12780], 90.00th=[ 18744], 95.00th=[ 64750], 00:18:40.099 | 99.00th=[ 88605], 99.50th=[ 90702], 99.90th=[ 96994], 99.95th=[ 99091], 00:18:40.099 | 99.99th=[111674] 00:18:40.099 bw ( KiB/s): min= 1008, max=41176, per=90.92%, avg=22792.48, stdev=11119.24, samples=23 00:18:40.099 iops : min= 252, max=10294, avg=5698.09, stdev=2779.79, samples=23 00:18:40.099 lat (usec) : 500=0.03%, 750=0.59%, 1000=2.04% 00:18:40.099 lat (msec) : 2=10.64%, 4=6.90%, 10=18.01%, 20=8.30%, 50=47.43% 00:18:40.099 lat (msec) : 100=4.68%, 250=1.33%, 500=0.05% 00:18:40.099 cpu : usr=99.31%, sys=0.16%, ctx=33, majf=0, minf=5587 00:18:40.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:40.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.100 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:40.100 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.100 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:40.100 second_half: (groupid=0, jobs=1): err= 0: pid=75371: Thu Nov 28 09:51:17 2024 00:18:40.100 read: IOPS=2676, BW=10.5MiB/s (11.0MB/s)(255MiB/24404msec) 00:18:40.100 slat (usec): min=3, max=980, avg= 5.30, stdev= 4.13 00:18:40.100 clat (usec): min=724, max=437874, avg=35948.32, stdev=24165.50 00:18:40.100 lat (usec): min=730, max=437880, avg=35953.62, stdev=24165.65 00:18:40.100 clat percentiles (msec): 00:18:40.100 | 1.00th=[ 12], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:18:40.100 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:18:40.100 | 70.00th=[ 34], 80.00th=[ 36], 
90.00th=[ 40], 95.00th=[ 47], 00:18:40.100 | 99.00th=[ 167], 99.50th=[ 205], 99.90th=[ 288], 99.95th=[ 309], 00:18:40.100 | 99.99th=[ 435] 00:18:40.100 write: IOPS=3133, BW=12.2MiB/s (12.8MB/s)(256MiB/20914msec); 0 zone resets 00:18:40.100 slat (usec): min=3, max=1306, avg= 7.24, stdev= 7.01 00:18:40.100 clat (usec): min=371, max=113982, avg=11809.15, stdev=19756.27 00:18:40.100 lat (usec): min=382, max=113987, avg=11816.38, stdev=19756.39 00:18:40.100 clat percentiles (usec): 00:18:40.100 | 1.00th=[ 725], 5.00th=[ 1074], 10.00th=[ 1287], 20.00th=[ 1631], 00:18:40.100 | 30.00th=[ 2966], 40.00th=[ 4080], 50.00th=[ 5211], 60.00th=[ 6063], 00:18:40.100 | 70.00th=[ 7898], 80.00th=[ 14222], 90.00th=[ 23200], 95.00th=[ 73925], 00:18:40.100 | 99.00th=[ 89654], 99.50th=[ 91751], 99.90th=[ 98042], 99.95th=[101188], 00:18:40.100 | 99.99th=[112722] 00:18:40.100 bw ( KiB/s): min= 336, max=51080, per=83.65%, avg=20971.52, stdev=14789.68, samples=25 00:18:40.100 iops : min= 84, max=12770, avg=5242.88, stdev=3697.42, samples=25 00:18:40.100 lat (usec) : 500=0.02%, 750=0.57%, 1000=1.42% 00:18:40.100 lat (msec) : 2=10.47%, 4=7.48%, 10=17.67%, 20=8.52%, 50=48.28% 00:18:40.100 lat (msec) : 100=4.23%, 250=1.26%, 500=0.09% 00:18:40.100 cpu : usr=98.46%, sys=0.34%, ctx=120, majf=0, minf=5526 00:18:40.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:40.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.100 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:40.100 issued rwts: total=65327,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.100 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:40.100 00:18:40.100 Run status group 0 (all jobs): 00:18:40.100 READ: bw=20.9MiB/s (21.9MB/s), 10.5MiB/s-10.5MiB/s (11.0MB/s-11.1MB/s), io=510MiB (535MB), run=24171-24404msec 00:18:40.100 WRITE: bw=24.5MiB/s (25.7MB/s), 12.2MiB/s-12.6MiB/s (12.8MB/s-13.2MB/s), io=512MiB (537MB), run=20272-20914msec 00:18:40.673 ----------------------------------------------------- 00:18:40.673 Suppressions used: 00:18:40.673 count bytes template 00:18:40.673 2 10 /usr/src/fio/parse.c 00:18:40.673 2 192 /usr/src/fio/iolog.c 00:18:40.673 1 8 libtcmalloc_minimal.so 00:18:40.673 1 904 libcrypto.so 00:18:40.673 ----------------------------------------------------- 00:18:40.673 00:18:40.673 09:51:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:40.673 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:40.673 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:40.673 09:51:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:40.673 09:51:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:40.673 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.673 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:40.673 09:51:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:40.673 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:40.673 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:40.673 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:18:40.673 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:40.673 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:40.673 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:40.673 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:40.674 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:40.674 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:40.674 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:40.674 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:40.674 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:40.674 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:40.674 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:40.674 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:40.674 09:51:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:40.935 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:40.935 fio-3.35 00:18:40.935 Starting 1 thread 00:18:59.053 00:18:59.053 test: (groupid=0, jobs=1): err= 0: pid=75689: Thu Nov 28 09:51:34 2024 00:18:59.053 read: IOPS=7945, BW=31.0MiB/s (32.5MB/s)(255MiB/8206msec) 00:18:59.053 slat (nsec): min=3087, max=62867, avg=4934.07, stdev=1197.90 00:18:59.053 clat (usec): min=559, max=32269, avg=16100.20, stdev=1929.64 00:18:59.053 lat (usec): min=564, max=32274, avg=16105.13, stdev=1929.67 00:18:59.053 clat percentiles (usec): 00:18:59.053 | 1.00th=[13829], 5.00th=[14484], 10.00th=[14615], 20.00th=[15139], 00:18:59.053 | 30.00th=[15533], 40.00th=[15664], 50.00th=[15795], 60.00th=[16057], 00:18:59.053 | 70.00th=[16188], 80.00th=[16319], 90.00th=[16712], 95.00th=[20055], 00:18:59.053 | 99.00th=[25297], 99.50th=[27132], 99.90th=[30278], 99.95th=[31065], 00:18:59.053 | 99.99th=[31851] 00:18:59.053 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(256MiB/5689msec); 0 zone resets 00:18:59.053 slat (usec): min=4, max=717, avg= 7.37, stdev= 4.33 00:18:59.053 clat (usec): min=534, max=50073, avg=11062.67, stdev=12554.98 00:18:59.053 lat (usec): min=541, max=50085, avg=11070.04, stdev=12555.21 00:18:59.053 clat percentiles (usec): 00:18:59.053 | 1.00th=[ 725], 5.00th=[ 930], 10.00th=[ 1090], 20.00th=[ 1270], 00:18:59.053 | 30.00th=[ 1434], 40.00th=[ 1876], 50.00th=[ 6718], 60.00th=[ 8455], 00:18:59.053 | 70.00th=[13304], 80.00th=[17957], 90.00th=[35914], 95.00th=[38536], 00:18:59.053 | 99.00th=[42206], 99.50th=[43779], 99.90th=[46400], 99.95th=[47973], 00:18:59.053 | 99.99th=[49546] 00:18:59.053 bw ( KiB/s): min=17480, max=68103, per=94.76%, avg=43663.17, stdev=13844.46, samples=12 00:18:59.053 iops : min= 4370, max=17025, avg=10915.67, stdev=3461.07, samples=12 00:18:59.053 lat (usec) : 750=0.72%, 1000=2.75% 00:18:59.053 lat (msec) : 2=16.77%, 4=0.86%, 10=11.16%, 20=56.58%, 50=11.17% 00:18:59.053 lat (msec) : 100=0.01% 
00:18:59.053 cpu : usr=99.04%, sys=0.17%, ctx=30, majf=0, minf=5565 00:18:59.053 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:59.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.053 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:59.053 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.053 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:59.053 00:18:59.053 Run status group 0 (all jobs): 00:18:59.053 READ: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=255MiB (267MB), run=8206-8206msec 00:18:59.053 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=256MiB (268MB), run=5689-5689msec 00:18:59.053 ----------------------------------------------------- 00:18:59.053 Suppressions used: 00:18:59.053 count bytes template 00:18:59.053 1 5 /usr/src/fio/parse.c 00:18:59.053 2 192 /usr/src/fio/iolog.c 00:18:59.053 1 8 libtcmalloc_minimal.so 00:18:59.053 1 904 libcrypto.so 00:18:59.053 ----------------------------------------------------- 00:18:59.053 00:18:59.053 09:51:36 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:59.053 09:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:59.053 09:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:59.053 09:51:36 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:59.053 Remove shared memory files 00:18:59.053 09:51:36 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:59.053 09:51:36 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:59.053 09:51:36 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:59.053 09:51:36 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:59.053 09:51:36 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57124 /dev/shm/spdk_tgt_trace.pid74001 00:18:59.053 09:51:36 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:59.053 09:51:36 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:59.053 ************************************ 00:18:59.053 END TEST ftl_fio_basic 00:18:59.053 ************************************ 00:18:59.053 00:18:59.053 real 1m5.634s 00:18:59.053 user 2m20.999s 00:18:59.053 sys 0m3.226s 00:18:59.053 09:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:59.053 09:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:59.053 09:51:36 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:59.053 09:51:36 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:59.053 09:51:36 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:59.053 09:51:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:59.053 ************************************ 00:18:59.053 START TEST ftl_bdevperf 00:18:59.053 ************************************ 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:59.053 * Looking for test storage... 
00:18:59.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:59.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.053 --rc genhtml_branch_coverage=1 00:18:59.053 --rc genhtml_function_coverage=1 00:18:59.053 --rc genhtml_legend=1 00:18:59.053 --rc geninfo_all_blocks=1 00:18:59.053 --rc geninfo_unexecuted_blocks=1 00:18:59.053 00:18:59.053 ' 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:59.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.053 --rc genhtml_branch_coverage=1 00:18:59.053 
--rc genhtml_function_coverage=1 00:18:59.053 --rc genhtml_legend=1 00:18:59.053 --rc geninfo_all_blocks=1 00:18:59.053 --rc geninfo_unexecuted_blocks=1 00:18:59.053 00:18:59.053 ' 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:59.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.053 --rc genhtml_branch_coverage=1 00:18:59.053 --rc genhtml_function_coverage=1 00:18:59.053 --rc genhtml_legend=1 00:18:59.053 --rc geninfo_all_blocks=1 00:18:59.053 --rc geninfo_unexecuted_blocks=1 00:18:59.053 00:18:59.053 ' 00:18:59.053 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:59.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.053 --rc genhtml_branch_coverage=1 00:18:59.053 --rc genhtml_function_coverage=1 00:18:59.053 --rc genhtml_legend=1 00:18:59.053 --rc geninfo_all_blocks=1 00:18:59.053 --rc geninfo_unexecuted_blocks=1 00:18:59.053 00:18:59.053 ' 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75933 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75933 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75933 ']' 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.054 09:51:36 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:59.054 [2024-11-28 09:51:36.791889] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
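At this point the log shows the bdevperf application being launched in RPC-driven mode (-z) with ftl0 as the eventual test target, and the script waiting for its RPC socket before any bdevs exist; everything that follows is configured over rpc.py. A minimal sketch of that launch-and-wait pattern, using the paths recorded in this run (the polling loop below is an illustrative stand-in for what waitforlisten does, not the script's exact code):

# Start bdevperf in RPC-driven mode; -z defers all configuration to RPC,
# -T names the bdev that the perf job will target once it has been created.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!

# Wait until the RPC server answers on the default socket before configuring anything.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done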
00:18:59.054 [2024-11-28 09:51:36.792244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75933 ] 00:18:59.054 [2024-11-28 09:51:36.948905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.054 [2024-11-28 09:51:37.045061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:59.054 09:51:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:59.313 09:51:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:59.313 { 00:18:59.313 "name": "nvme0n1", 00:18:59.313 "aliases": [ 00:18:59.313 "88b4682e-6629-4171-b9cd-2c6209af0078" 00:18:59.313 ], 00:18:59.313 "product_name": "NVMe disk", 00:18:59.313 "block_size": 4096, 00:18:59.313 "num_blocks": 1310720, 00:18:59.313 "uuid": "88b4682e-6629-4171-b9cd-2c6209af0078", 00:18:59.313 "numa_id": -1, 00:18:59.313 "assigned_rate_limits": { 00:18:59.313 "rw_ios_per_sec": 0, 00:18:59.313 "rw_mbytes_per_sec": 0, 00:18:59.314 "r_mbytes_per_sec": 0, 00:18:59.314 "w_mbytes_per_sec": 0 00:18:59.314 }, 00:18:59.314 "claimed": true, 00:18:59.314 "claim_type": "read_many_write_one", 00:18:59.314 "zoned": false, 00:18:59.314 "supported_io_types": { 00:18:59.314 "read": true, 00:18:59.314 "write": true, 00:18:59.314 "unmap": true, 00:18:59.314 "flush": true, 00:18:59.314 "reset": true, 00:18:59.314 "nvme_admin": true, 00:18:59.314 "nvme_io": true, 00:18:59.314 "nvme_io_md": false, 00:18:59.314 "write_zeroes": true, 00:18:59.314 "zcopy": false, 00:18:59.314 "get_zone_info": false, 00:18:59.314 "zone_management": false, 00:18:59.314 "zone_append": false, 00:18:59.314 "compare": true, 00:18:59.314 "compare_and_write": false, 00:18:59.314 "abort": true, 00:18:59.314 "seek_hole": false, 00:18:59.314 "seek_data": false, 00:18:59.314 "copy": true, 00:18:59.314 "nvme_iov_md": false 00:18:59.314 }, 00:18:59.314 "driver_specific": { 00:18:59.314 
"nvme": [ 00:18:59.314 { 00:18:59.314 "pci_address": "0000:00:11.0", 00:18:59.314 "trid": { 00:18:59.314 "trtype": "PCIe", 00:18:59.314 "traddr": "0000:00:11.0" 00:18:59.314 }, 00:18:59.314 "ctrlr_data": { 00:18:59.314 "cntlid": 0, 00:18:59.314 "vendor_id": "0x1b36", 00:18:59.314 "model_number": "QEMU NVMe Ctrl", 00:18:59.314 "serial_number": "12341", 00:18:59.314 "firmware_revision": "8.0.0", 00:18:59.314 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:59.314 "oacs": { 00:18:59.314 "security": 0, 00:18:59.314 "format": 1, 00:18:59.314 "firmware": 0, 00:18:59.314 "ns_manage": 1 00:18:59.314 }, 00:18:59.314 "multi_ctrlr": false, 00:18:59.314 "ana_reporting": false 00:18:59.314 }, 00:18:59.314 "vs": { 00:18:59.314 "nvme_version": "1.4" 00:18:59.314 }, 00:18:59.314 "ns_data": { 00:18:59.314 "id": 1, 00:18:59.314 "can_share": false 00:18:59.314 } 00:18:59.314 } 00:18:59.314 ], 00:18:59.314 "mp_policy": "active_passive" 00:18:59.314 } 00:18:59.314 } 00:18:59.314 ]' 00:18:59.314 09:51:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:59.314 09:51:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:59.314 09:51:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:59.314 09:51:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:59.314 09:51:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:59.314 09:51:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:59.314 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:59.314 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:59.314 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:59.314 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:59.314 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:59.575 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=5719a2cf-c88e-43dd-abe7-dabfc6afc72a 00:18:59.575 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:59.575 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5719a2cf-c88e-43dd-abe7-dabfc6afc72a 00:18:59.835 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:00.094 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=be0b3982-a589-40b1-aeb5-4fd55a76b974 00:19:00.094 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u be0b3982-a589-40b1-aeb5-4fd55a76b974 00:19:00.094 09:51:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=a29684f5-3bd3-4531-9aad-6b7490f81063 00:19:00.094 09:51:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a29684f5-3bd3-4531-9aad-6b7490f81063 00:19:00.094 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:00.094 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:00.094 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=a29684f5-3bd3-4531-9aad-6b7490f81063 00:19:00.094 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:00.094 09:51:38 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size a29684f5-3bd3-4531-9aad-6b7490f81063 00:19:00.094 09:51:38 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=a29684f5-3bd3-4531-9aad-6b7490f81063 00:19:00.094 09:51:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:00.094 09:51:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:00.094 09:51:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:00.094 09:51:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a29684f5-3bd3-4531-9aad-6b7490f81063 00:19:00.355 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:00.355 { 00:19:00.355 "name": "a29684f5-3bd3-4531-9aad-6b7490f81063", 00:19:00.355 "aliases": [ 00:19:00.355 "lvs/nvme0n1p0" 00:19:00.355 ], 00:19:00.355 "product_name": "Logical Volume", 00:19:00.355 "block_size": 4096, 00:19:00.355 "num_blocks": 26476544, 00:19:00.355 "uuid": "a29684f5-3bd3-4531-9aad-6b7490f81063", 00:19:00.355 "assigned_rate_limits": { 00:19:00.355 "rw_ios_per_sec": 0, 00:19:00.355 "rw_mbytes_per_sec": 0, 00:19:00.355 "r_mbytes_per_sec": 0, 00:19:00.355 "w_mbytes_per_sec": 0 00:19:00.355 }, 00:19:00.355 "claimed": false, 00:19:00.355 "zoned": false, 00:19:00.355 "supported_io_types": { 00:19:00.355 "read": true, 00:19:00.355 "write": true, 00:19:00.355 "unmap": true, 00:19:00.355 "flush": false, 00:19:00.355 "reset": true, 00:19:00.355 "nvme_admin": false, 00:19:00.355 "nvme_io": false, 00:19:00.355 "nvme_io_md": false, 00:19:00.355 "write_zeroes": true, 00:19:00.355 "zcopy": false, 00:19:00.355 "get_zone_info": false, 00:19:00.355 "zone_management": false, 00:19:00.355 "zone_append": false, 00:19:00.355 "compare": false, 00:19:00.355 "compare_and_write": false, 00:19:00.355 "abort": false, 00:19:00.355 "seek_hole": true, 00:19:00.355 "seek_data": true, 00:19:00.355 "copy": false, 00:19:00.355 "nvme_iov_md": false 00:19:00.355 }, 00:19:00.355 "driver_specific": { 00:19:00.355 "lvol": { 00:19:00.355 "lvol_store_uuid": "be0b3982-a589-40b1-aeb5-4fd55a76b974", 00:19:00.355 "base_bdev": "nvme0n1", 00:19:00.355 "thin_provision": true, 00:19:00.355 "num_allocated_clusters": 0, 00:19:00.355 "snapshot": false, 00:19:00.355 "clone": false, 00:19:00.355 "esnap_clone": false 00:19:00.355 } 00:19:00.355 } 00:19:00.355 } 00:19:00.355 ]' 00:19:00.355 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:00.355 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:00.355 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:00.355 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:00.355 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:00.355 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:00.355 09:51:39 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:00.355 09:51:39 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:00.355 09:51:39 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:00.616 09:51:39 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:00.616 09:51:39 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:00.616 09:51:39 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size a29684f5-3bd3-4531-9aad-6b7490f81063 00:19:00.616 09:51:39 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=a29684f5-3bd3-4531-9aad-6b7490f81063 00:19:00.616 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:00.616 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:00.616 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:00.616 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a29684f5-3bd3-4531-9aad-6b7490f81063 00:19:00.878 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:00.878 { 00:19:00.878 "name": "a29684f5-3bd3-4531-9aad-6b7490f81063", 00:19:00.878 "aliases": [ 00:19:00.878 "lvs/nvme0n1p0" 00:19:00.878 ], 00:19:00.878 "product_name": "Logical Volume", 00:19:00.878 "block_size": 4096, 00:19:00.878 "num_blocks": 26476544, 00:19:00.878 "uuid": "a29684f5-3bd3-4531-9aad-6b7490f81063", 00:19:00.878 "assigned_rate_limits": { 00:19:00.878 "rw_ios_per_sec": 0, 00:19:00.878 "rw_mbytes_per_sec": 0, 00:19:00.878 "r_mbytes_per_sec": 0, 00:19:00.878 "w_mbytes_per_sec": 0 00:19:00.878 }, 00:19:00.878 "claimed": false, 00:19:00.878 "zoned": false, 00:19:00.878 "supported_io_types": { 00:19:00.878 "read": true, 00:19:00.878 "write": true, 00:19:00.878 "unmap": true, 00:19:00.878 "flush": false, 00:19:00.878 "reset": true, 00:19:00.878 "nvme_admin": false, 00:19:00.878 "nvme_io": false, 00:19:00.878 "nvme_io_md": false, 00:19:00.878 "write_zeroes": true, 00:19:00.878 "zcopy": false, 00:19:00.878 "get_zone_info": false, 00:19:00.878 "zone_management": false, 00:19:00.878 "zone_append": false, 00:19:00.878 "compare": false, 00:19:00.878 "compare_and_write": false, 00:19:00.878 "abort": false, 00:19:00.878 "seek_hole": true, 00:19:00.878 "seek_data": true, 00:19:00.878 "copy": false, 00:19:00.878 "nvme_iov_md": false 00:19:00.878 }, 00:19:00.878 "driver_specific": { 00:19:00.878 "lvol": { 00:19:00.878 "lvol_store_uuid": "be0b3982-a589-40b1-aeb5-4fd55a76b974", 00:19:00.878 "base_bdev": "nvme0n1", 00:19:00.878 "thin_provision": true, 00:19:00.878 "num_allocated_clusters": 0, 00:19:00.878 "snapshot": false, 00:19:00.878 "clone": false, 00:19:00.878 "esnap_clone": false 00:19:00.878 } 00:19:00.878 } 00:19:00.878 } 00:19:00.878 ]' 00:19:00.878 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:00.878 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:00.878 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:00.878 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:00.878 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:00.878 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:00.878 09:51:39 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:00.878 09:51:39 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:01.139 09:51:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:01.139 09:51:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size a29684f5-3bd3-4531-9aad-6b7490f81063 00:19:01.139 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=a29684f5-3bd3-4531-9aad-6b7490f81063 00:19:01.139 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:01.139 09:51:39 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:01.139 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:01.139 09:51:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a29684f5-3bd3-4531-9aad-6b7490f81063 00:19:01.400 09:51:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:01.400 { 00:19:01.400 "name": "a29684f5-3bd3-4531-9aad-6b7490f81063", 00:19:01.400 "aliases": [ 00:19:01.400 "lvs/nvme0n1p0" 00:19:01.400 ], 00:19:01.400 "product_name": "Logical Volume", 00:19:01.400 "block_size": 4096, 00:19:01.400 "num_blocks": 26476544, 00:19:01.400 "uuid": "a29684f5-3bd3-4531-9aad-6b7490f81063", 00:19:01.400 "assigned_rate_limits": { 00:19:01.400 "rw_ios_per_sec": 0, 00:19:01.400 "rw_mbytes_per_sec": 0, 00:19:01.400 "r_mbytes_per_sec": 0, 00:19:01.400 "w_mbytes_per_sec": 0 00:19:01.400 }, 00:19:01.400 "claimed": false, 00:19:01.400 "zoned": false, 00:19:01.400 "supported_io_types": { 00:19:01.400 "read": true, 00:19:01.400 "write": true, 00:19:01.400 "unmap": true, 00:19:01.400 "flush": false, 00:19:01.400 "reset": true, 00:19:01.400 "nvme_admin": false, 00:19:01.400 "nvme_io": false, 00:19:01.400 "nvme_io_md": false, 00:19:01.400 "write_zeroes": true, 00:19:01.400 "zcopy": false, 00:19:01.400 "get_zone_info": false, 00:19:01.400 "zone_management": false, 00:19:01.400 "zone_append": false, 00:19:01.400 "compare": false, 00:19:01.400 "compare_and_write": false, 00:19:01.400 "abort": false, 00:19:01.401 "seek_hole": true, 00:19:01.401 "seek_data": true, 00:19:01.401 "copy": false, 00:19:01.401 "nvme_iov_md": false 00:19:01.401 }, 00:19:01.401 "driver_specific": { 00:19:01.401 "lvol": { 00:19:01.401 "lvol_store_uuid": "be0b3982-a589-40b1-aeb5-4fd55a76b974", 00:19:01.401 "base_bdev": "nvme0n1", 00:19:01.401 "thin_provision": true, 00:19:01.401 "num_allocated_clusters": 0, 00:19:01.401 "snapshot": false, 00:19:01.401 "clone": false, 00:19:01.401 "esnap_clone": false 00:19:01.401 } 00:19:01.401 } 00:19:01.401 } 00:19:01.401 ]' 00:19:01.401 09:51:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:01.401 09:51:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:01.401 09:51:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:01.401 09:51:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:01.401 09:51:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:01.401 09:51:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:01.401 09:51:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:01.401 09:51:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a29684f5-3bd3-4531-9aad-6b7490f81063 -c nvc0n1p0 --l2p_dram_limit 20 00:19:01.663 [2024-11-28 09:51:40.333509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.663 [2024-11-28 09:51:40.333566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:01.663 [2024-11-28 09:51:40.333579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:01.663 [2024-11-28 09:51:40.333588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.663 [2024-11-28 09:51:40.333644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.663 [2024-11-28 09:51:40.333654] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:01.663 [2024-11-28 09:51:40.333661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:01.663 [2024-11-28 09:51:40.333669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.663 [2024-11-28 09:51:40.333683] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:01.663 [2024-11-28 09:51:40.334264] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:01.663 [2024-11-28 09:51:40.334280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.663 [2024-11-28 09:51:40.334289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:01.663 [2024-11-28 09:51:40.334296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:19:01.663 [2024-11-28 09:51:40.334304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.663 [2024-11-28 09:51:40.334329] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2289be54-f286-4717-97b8-6a2da3e9d76a 00:19:01.663 [2024-11-28 09:51:40.335663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.663 [2024-11-28 09:51:40.335875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:01.663 [2024-11-28 09:51:40.335898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:01.663 [2024-11-28 09:51:40.335905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.663 [2024-11-28 09:51:40.342881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.663 [2024-11-28 09:51:40.342992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:01.663 [2024-11-28 09:51:40.343007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.900 ms 00:19:01.663 [2024-11-28 09:51:40.343017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.663 [2024-11-28 09:51:40.343091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.663 [2024-11-28 09:51:40.343098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:01.663 [2024-11-28 09:51:40.343109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:01.663 [2024-11-28 09:51:40.343115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.663 [2024-11-28 09:51:40.343167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.663 [2024-11-28 09:51:40.343175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:01.663 [2024-11-28 09:51:40.343185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:01.663 [2024-11-28 09:51:40.343191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.663 [2024-11-28 09:51:40.343212] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:01.663 [2024-11-28 09:51:40.346483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.663 [2024-11-28 09:51:40.346509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:01.663 [2024-11-28 09:51:40.346517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.281 ms 00:19:01.663 [2024-11-28 09:51:40.346528] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.663 [2024-11-28 09:51:40.346554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.663 [2024-11-28 09:51:40.346562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:01.663 [2024-11-28 09:51:40.346568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:01.663 [2024-11-28 09:51:40.346575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.663 [2024-11-28 09:51:40.346592] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:01.663 [2024-11-28 09:51:40.346711] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:01.663 [2024-11-28 09:51:40.346722] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:01.663 [2024-11-28 09:51:40.346733] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:01.663 [2024-11-28 09:51:40.346741] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:01.663 [2024-11-28 09:51:40.346750] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:01.663 [2024-11-28 09:51:40.346757] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:01.663 [2024-11-28 09:51:40.346765] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:01.663 [2024-11-28 09:51:40.346772] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:01.663 [2024-11-28 09:51:40.346779] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:01.663 [2024-11-28 09:51:40.346788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.663 [2024-11-28 09:51:40.346796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:01.663 [2024-11-28 09:51:40.346802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:19:01.663 [2024-11-28 09:51:40.346810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.663 [2024-11-28 09:51:40.346874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.663 [2024-11-28 09:51:40.346884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:01.663 [2024-11-28 09:51:40.346889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:01.663 [2024-11-28 09:51:40.346898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.663 [2024-11-28 09:51:40.346969] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:01.663 [2024-11-28 09:51:40.346981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:01.663 [2024-11-28 09:51:40.346988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:01.663 [2024-11-28 09:51:40.346996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.663 [2024-11-28 09:51:40.347002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:01.663 [2024-11-28 09:51:40.347009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:01.663 [2024-11-28 09:51:40.347015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:01.663 
[2024-11-28 09:51:40.347022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:01.663 [2024-11-28 09:51:40.347027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:01.663 [2024-11-28 09:51:40.347033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:01.663 [2024-11-28 09:51:40.347038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:01.663 [2024-11-28 09:51:40.347053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:01.663 [2024-11-28 09:51:40.347058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:01.663 [2024-11-28 09:51:40.347066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:01.663 [2024-11-28 09:51:40.347071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:01.663 [2024-11-28 09:51:40.347079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.663 [2024-11-28 09:51:40.347085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:01.663 [2024-11-28 09:51:40.347092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:01.663 [2024-11-28 09:51:40.347097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.663 [2024-11-28 09:51:40.347104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:01.663 [2024-11-28 09:51:40.347110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:01.663 [2024-11-28 09:51:40.347117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:01.663 [2024-11-28 09:51:40.347122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:01.663 [2024-11-28 09:51:40.347132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:01.663 [2024-11-28 09:51:40.347137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:01.663 [2024-11-28 09:51:40.347144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:01.663 [2024-11-28 09:51:40.347149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:01.663 [2024-11-28 09:51:40.347170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:01.663 [2024-11-28 09:51:40.347175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:01.663 [2024-11-28 09:51:40.347183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:01.663 [2024-11-28 09:51:40.347188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:01.663 [2024-11-28 09:51:40.347197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:01.663 [2024-11-28 09:51:40.347202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:01.663 [2024-11-28 09:51:40.347209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:01.663 [2024-11-28 09:51:40.347214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:01.663 [2024-11-28 09:51:40.347221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:01.663 [2024-11-28 09:51:40.347226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:01.663 [2024-11-28 09:51:40.347233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:01.664 [2024-11-28 09:51:40.347239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:01.664 [2024-11-28 09:51:40.347246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.664 [2024-11-28 09:51:40.347250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:01.664 [2024-11-28 09:51:40.347257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:01.664 [2024-11-28 09:51:40.347262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.664 [2024-11-28 09:51:40.347270] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:01.664 [2024-11-28 09:51:40.347277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:01.664 [2024-11-28 09:51:40.347284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:01.664 [2024-11-28 09:51:40.347289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.664 [2024-11-28 09:51:40.347300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:01.664 [2024-11-28 09:51:40.347306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:01.664 [2024-11-28 09:51:40.347313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:01.664 [2024-11-28 09:51:40.347318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:01.664 [2024-11-28 09:51:40.347324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:01.664 [2024-11-28 09:51:40.347330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:01.664 [2024-11-28 09:51:40.347341] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:01.664 [2024-11-28 09:51:40.347348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:01.664 [2024-11-28 09:51:40.347356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:01.664 [2024-11-28 09:51:40.347362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:01.664 [2024-11-28 09:51:40.347369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:01.664 [2024-11-28 09:51:40.347374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:01.664 [2024-11-28 09:51:40.347381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:01.664 [2024-11-28 09:51:40.347387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:01.664 [2024-11-28 09:51:40.347395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:01.664 [2024-11-28 09:51:40.347400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:01.664 [2024-11-28 09:51:40.347408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:01.664 [2024-11-28 09:51:40.347414] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:01.664 [2024-11-28 09:51:40.347420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:01.664 [2024-11-28 09:51:40.347426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:01.664 [2024-11-28 09:51:40.347433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:01.664 [2024-11-28 09:51:40.347439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:01.664 [2024-11-28 09:51:40.347446] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:01.664 [2024-11-28 09:51:40.347452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:01.664 [2024-11-28 09:51:40.347462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:01.664 [2024-11-28 09:51:40.347468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:01.664 [2024-11-28 09:51:40.347475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:01.664 [2024-11-28 09:51:40.347480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:01.664 [2024-11-28 09:51:40.347488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.664 [2024-11-28 09:51:40.347495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:01.664 [2024-11-28 09:51:40.347502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:19:01.664 [2024-11-28 09:51:40.347508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.664 [2024-11-28 09:51:40.347547] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
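The superblock metadata layout entries just dumped can be cross-checked against the MiB-based region dump above: blk_offs and blk_sz appear to be counted in 4096-byte FTL blocks. A minimal sketch of that conversion, assuming bash and bc are available and the usual 4 KiB block size holds:

    blk_offs=0x5020; blk_sz=0x80                          # entry "type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80"
    echo "scale=3; $((blk_offs)) * 4096 / 1048576" | bc   # 80.125 -> matches "offset: 80.12 MiB"
    echo "scale=3; $((blk_sz)) * 4096 / 1048576" | bc     # .500   -> matches "blocks: 0.50 MiB"

The match with the band_md region (offset 80.12 MiB, 0.50 MiB of blocks) suggests type 0x3 is band_md in this layout; the type-to-name mapping is inferred from the numbers, not stated by the log.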
00:19:01.664 [2024-11-28 09:51:40.347555] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:05.875 [2024-11-28 09:51:44.093013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.093070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:05.875 [2024-11-28 09:51:44.093087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3745.452 ms 00:19:05.875 [2024-11-28 09:51:44.093101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 09:51:44.116739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.116897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:05.875 [2024-11-28 09:51:44.116917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.455 ms 00:19:05.875 [2024-11-28 09:51:44.116924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 09:51:44.117024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.117033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:05.875 [2024-11-28 09:51:44.117044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:05.875 [2024-11-28 09:51:44.117050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 09:51:44.156137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.156178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:05.875 [2024-11-28 09:51:44.156191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.060 ms 00:19:05.875 [2024-11-28 09:51:44.156198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 09:51:44.156230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.156238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:05.875 [2024-11-28 09:51:44.156246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:05.875 [2024-11-28 09:51:44.156254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 09:51:44.156658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.156674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:05.875 [2024-11-28 09:51:44.156683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:19:05.875 [2024-11-28 09:51:44.156689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 09:51:44.156778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.156785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:05.875 [2024-11-28 09:51:44.156796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:05.875 [2024-11-28 09:51:44.156803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 09:51:44.168649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.168679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:05.875 [2024-11-28 
09:51:44.168689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.831 ms 00:19:05.875 [2024-11-28 09:51:44.168701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 09:51:44.178601] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:05.875 [2024-11-28 09:51:44.184019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.184173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:05.875 [2024-11-28 09:51:44.184186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.266 ms 00:19:05.875 [2024-11-28 09:51:44.184194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 09:51:44.257329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.257364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:05.875 [2024-11-28 09:51:44.257373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.116 ms 00:19:05.875 [2024-11-28 09:51:44.257382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 09:51:44.257512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.257523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:05.875 [2024-11-28 09:51:44.257531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:19:05.875 [2024-11-28 09:51:44.257541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 09:51:44.275735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.275848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:05.875 [2024-11-28 09:51:44.275862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.164 ms 00:19:05.875 [2024-11-28 09:51:44.275870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 09:51:44.293653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.293680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:05.875 [2024-11-28 09:51:44.293689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.767 ms 00:19:05.875 [2024-11-28 09:51:44.293697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 09:51:44.294126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.294136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:05.875 [2024-11-28 09:51:44.294143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:19:05.875 [2024-11-28 09:51:44.294162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 09:51:44.358266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.358298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:05.875 [2024-11-28 09:51:44.358307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.080 ms 00:19:05.875 [2024-11-28 09:51:44.358316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 
09:51:44.378236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.378265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:05.875 [2024-11-28 09:51:44.378276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.870 ms 00:19:05.875 [2024-11-28 09:51:44.378284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.875 [2024-11-28 09:51:44.396566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.875 [2024-11-28 09:51:44.396681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:05.876 [2024-11-28 09:51:44.396694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.256 ms 00:19:05.876 [2024-11-28 09:51:44.396701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.876 [2024-11-28 09:51:44.415157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.876 [2024-11-28 09:51:44.415186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:05.876 [2024-11-28 09:51:44.415194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.427 ms 00:19:05.876 [2024-11-28 09:51:44.415202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.876 [2024-11-28 09:51:44.415231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.876 [2024-11-28 09:51:44.415243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:05.876 [2024-11-28 09:51:44.415250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:05.876 [2024-11-28 09:51:44.415258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.876 [2024-11-28 09:51:44.415322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.876 [2024-11-28 09:51:44.415332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:05.876 [2024-11-28 09:51:44.415339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:05.876 [2024-11-28 09:51:44.415347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.876 [2024-11-28 09:51:44.416882] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4082.985 ms, result 0 00:19:05.876 { 00:19:05.876 "name": "ftl0", 00:19:05.876 "uuid": "2289be54-f286-4717-97b8-6a2da3e9d76a" 00:19:05.876 } 00:19:05.876 09:51:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:05.876 09:51:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:05.876 09:51:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:05.876 09:51:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:05.876 [2024-11-28 09:51:44.728307] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:05.876 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:05.876 Zero copy mechanism will not be used. 00:19:05.876 Running I/O for 4 seconds... 
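The first bdevperf pass above drives queue depth 1 with random 69632-byte writes for 4 seconds; just before it, the step at bdevperf.sh line 28 confirms the FTL bdev answers RPC by piping bdev_ftl_get_stats -b ftl0 through jq -r .name and grep -qw ftl0. 69632 bytes is 68 KiB (17 x 4096), which is why the preceding notice reports it as exceeding the 65536-byte zero-copy threshold: this pass runs without the zero-copy path. A sketch of re-driving the same pass by hand against a running bdevperf instance, mirroring the invocation logged above:

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests \
        -q 1 -w randwrite -t 4 -o 69632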
00:19:08.207 754.00 IOPS, 50.07 MiB/s [2024-11-28T09:51:48.031Z] 753.50 IOPS, 50.04 MiB/s [2024-11-28T09:51:48.977Z] 744.67 IOPS, 49.45 MiB/s [2024-11-28T09:51:48.977Z] 740.75 IOPS, 49.19 MiB/s 00:19:10.097 Latency(us) 00:19:10.097 [2024-11-28T09:51:48.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.097 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:10.097 ftl0 : 4.00 740.59 49.18 0.00 0.00 1428.59 478.92 8116.38 00:19:10.097 [2024-11-28T09:51:48.977Z] =================================================================================================================== 00:19:10.097 [2024-11-28T09:51:48.977Z] Total : 740.59 49.18 0.00 0.00 1428.59 478.92 8116.38 00:19:10.097 { 00:19:10.097 "results": [ 00:19:10.097 { 00:19:10.097 "job": "ftl0", 00:19:10.097 "core_mask": "0x1", 00:19:10.097 "workload": "randwrite", 00:19:10.097 "status": "finished", 00:19:10.097 "queue_depth": 1, 00:19:10.097 "io_size": 69632, 00:19:10.097 "runtime": 4.002239, 00:19:10.097 "iops": 740.5854572902817, 00:19:10.097 "mibps": 49.179503023182775, 00:19:10.097 "io_failed": 0, 00:19:10.097 "io_timeout": 0, 00:19:10.097 "avg_latency_us": 1428.585238243538, 00:19:10.097 "min_latency_us": 478.91692307692307, 00:19:10.097 "max_latency_us": 8116.381538461538 00:19:10.097 } 00:19:10.097 ], 00:19:10.097 "core_count": 1 00:19:10.097 } 00:19:10.097 [2024-11-28 09:51:48.736712] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:10.097 09:51:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:10.097 [2024-11-28 09:51:48.845287] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:10.097 Running I/O for 4 seconds... 
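As a quick consistency check on the queue-depth-1 results above, the reported MiB/s follows directly from IOPS x I/O size; a minimal sketch, assuming bc is installed:

    echo "scale=4; 740.5854572902817 * 69632 / 1048576" | bc   # 49.1795 -> matches "mibps": 49.1795...

The same relation holds for the 128-deep, 4096-byte pass that has just started below.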
00:19:11.997 7804.00 IOPS, 30.48 MiB/s [2024-11-28T09:51:52.268Z] 7031.00 IOPS, 27.46 MiB/s [2024-11-28T09:51:53.215Z] 6588.00 IOPS, 25.73 MiB/s [2024-11-28T09:51:53.215Z] 6087.75 IOPS, 23.78 MiB/s 00:19:14.335 Latency(us) 00:19:14.335 [2024-11-28T09:51:53.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.335 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:14.335 ftl0 : 4.03 6071.98 23.72 0.00 0.00 20997.33 374.94 47185.92 00:19:14.335 [2024-11-28T09:51:53.215Z] =================================================================================================================== 00:19:14.335 [2024-11-28T09:51:53.215Z] Total : 6071.98 23.72 0.00 0.00 20997.33 0.00 47185.92 00:19:14.335 [2024-11-28 09:51:52.885842] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:14.335 { 00:19:14.335 "results": [ 00:19:14.335 { 00:19:14.335 "job": "ftl0", 00:19:14.335 "core_mask": "0x1", 00:19:14.335 "workload": "randwrite", 00:19:14.335 "status": "finished", 00:19:14.335 "queue_depth": 128, 00:19:14.335 "io_size": 4096, 00:19:14.335 "runtime": 4.031469, 00:19:14.335 "iops": 6071.980213664052, 00:19:14.335 "mibps": 23.718672709625203, 00:19:14.335 "io_failed": 0, 00:19:14.335 "io_timeout": 0, 00:19:14.335 "avg_latency_us": 20997.328660107407, 00:19:14.335 "min_latency_us": 374.94153846153847, 00:19:14.335 "max_latency_us": 47185.92 00:19:14.335 } 00:19:14.335 ], 00:19:14.335 "core_count": 1 00:19:14.335 } 00:19:14.335 09:51:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:14.335 [2024-11-28 09:51:53.003419] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:14.335 Running I/O for 4 seconds... 
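A rough Little's-law check on the 128-deep randwrite pass above (an interpretive note, not something the log computes): in-flight I/O is approximately IOPS x average latency, which should land near the configured queue depth. A minimal sketch, assuming bc is installed:

    echo "scale=2; 6071.980213664052 * 20997.328660107407 / 1000000" | bc   # ~127.49, close to the queue depth of 128

The small shortfall plausibly reflects ramp-up and teardown inside the 4-second window.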
00:19:16.222 4250.00 IOPS, 16.60 MiB/s [2024-11-28T09:51:56.045Z] 4296.00 IOPS, 16.78 MiB/s [2024-11-28T09:51:57.434Z] 4338.67 IOPS, 16.95 MiB/s [2024-11-28T09:51:57.434Z] 4480.50 IOPS, 17.50 MiB/s 00:19:18.554 Latency(us) 00:19:18.554 [2024-11-28T09:51:57.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.554 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:18.554 Verification LBA range: start 0x0 length 0x1400000 00:19:18.554 ftl0 : 4.02 4495.14 17.56 0.00 0.00 28389.17 434.81 44362.83 00:19:18.554 [2024-11-28T09:51:57.434Z] =================================================================================================================== 00:19:18.554 [2024-11-28T09:51:57.434Z] Total : 4495.14 17.56 0.00 0.00 28389.17 0.00 44362.83 00:19:18.554 [2024-11-28 09:51:57.034570] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:18.554 { 00:19:18.554 "results": [ 00:19:18.554 { 00:19:18.554 "job": "ftl0", 00:19:18.554 "core_mask": "0x1", 00:19:18.554 "workload": "verify", 00:19:18.554 "status": "finished", 00:19:18.554 "verify_range": { 00:19:18.554 "start": 0, 00:19:18.554 "length": 20971520 00:19:18.554 }, 00:19:18.554 "queue_depth": 128, 00:19:18.554 "io_size": 4096, 00:19:18.554 "runtime": 4.015445, 00:19:18.554 "iops": 4495.143128594714, 00:19:18.554 "mibps": 17.5591528460731, 00:19:18.554 "io_failed": 0, 00:19:18.554 "io_timeout": 0, 00:19:18.554 "avg_latency_us": 28389.165832346047, 00:19:18.554 "min_latency_us": 434.80615384615385, 00:19:18.554 "max_latency_us": 44362.83076923077 00:19:18.554 } 00:19:18.554 ], 00:19:18.554 "core_count": 1 00:19:18.554 } 00:19:18.554 09:51:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:18.554 [2024-11-28 09:51:57.238315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.554 [2024-11-28 09:51:57.238344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:18.554 [2024-11-28 09:51:57.238353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:18.554 [2024-11-28 09:51:57.238362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.554 [2024-11-28 09:51:57.238378] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:18.554 [2024-11-28 09:51:57.240576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.554 [2024-11-28 09:51:57.240596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:18.554 [2024-11-28 09:51:57.240607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.184 ms 00:19:18.554 [2024-11-28 09:51:57.240614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.554 [2024-11-28 09:51:57.243208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.554 [2024-11-28 09:51:57.243230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:18.554 [2024-11-28 09:51:57.243245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.575 ms 00:19:18.554 [2024-11-28 09:51:57.243251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.554 [2024-11-28 09:51:57.414012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.554 [2024-11-28 09:51:57.414038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:19:18.554 [2024-11-28 09:51:57.414050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 170.745 ms 00:19:18.554 [2024-11-28 09:51:57.414056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.554 [2024-11-28 09:51:57.418692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.554 [2024-11-28 09:51:57.418713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:18.554 [2024-11-28 09:51:57.418722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.610 ms 00:19:18.554 [2024-11-28 09:51:57.418731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.817 [2024-11-28 09:51:57.436828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.817 [2024-11-28 09:51:57.436850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:18.817 [2024-11-28 09:51:57.436860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.052 ms 00:19:18.817 [2024-11-28 09:51:57.436866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.817 [2024-11-28 09:51:57.450079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.817 [2024-11-28 09:51:57.450104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:18.817 [2024-11-28 09:51:57.450114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.185 ms 00:19:18.817 [2024-11-28 09:51:57.450121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.817 [2024-11-28 09:51:57.450234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.817 [2024-11-28 09:51:57.450244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:18.817 [2024-11-28 09:51:57.450255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:18.817 [2024-11-28 09:51:57.450261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.817 [2024-11-28 09:51:57.468716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.817 [2024-11-28 09:51:57.468737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:18.817 [2024-11-28 09:51:57.468747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.441 ms 00:19:18.817 [2024-11-28 09:51:57.468752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.817 [2024-11-28 09:51:57.486596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.817 [2024-11-28 09:51:57.486618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:18.817 [2024-11-28 09:51:57.486628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.817 ms 00:19:18.817 [2024-11-28 09:51:57.486634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.817 [2024-11-28 09:51:57.504368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.817 [2024-11-28 09:51:57.504389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:18.817 [2024-11-28 09:51:57.504399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.707 ms 00:19:18.817 [2024-11-28 09:51:57.504404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.817 [2024-11-28 09:51:57.521790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.817 [2024-11-28 09:51:57.521810] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:18.817 [2024-11-28 09:51:57.521821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.333 ms 00:19:18.817 [2024-11-28 09:51:57.521827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.817 [2024-11-28 09:51:57.521853] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:18.817 [2024-11-28 09:51:57.521865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:18.817 [2024-11-28 09:51:57.521875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:18.817 [2024-11-28 09:51:57.521881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:18.817 [2024-11-28 09:51:57.521889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:18.817 [2024-11-28 09:51:57.521894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:18.817 [2024-11-28 09:51:57.521901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:18.817 [2024-11-28 09:51:57.521907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:18.817 [2024-11-28 09:51:57.521914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:18.817 [2024-11-28 09:51:57.521920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:18.817 [2024-11-28 09:51:57.521927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:18.817 [2024-11-28 09:51:57.521933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:18.817 [2024-11-28 09:51:57.521940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.521946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.521955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.521961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.521968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.521974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.521983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.521988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:18.818 [2024-11-28 09:51:57.522034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522552] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:18.818 [2024-11-28 09:51:57.522572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:18.819 [2024-11-28 09:51:57.522585] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:18.819 [2024-11-28 09:51:57.522592] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2289be54-f286-4717-97b8-6a2da3e9d76a 00:19:18.819 [2024-11-28 09:51:57.522601] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:18.819 [2024-11-28 09:51:57.522609] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:18.819 [2024-11-28 09:51:57.522614] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:18.819 [2024-11-28 09:51:57.522622] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:18.819 [2024-11-28 09:51:57.522627] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:18.819 [2024-11-28 09:51:57.522635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:18.819 [2024-11-28 09:51:57.522640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:18.819 [2024-11-28 09:51:57.522648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:18.819 [2024-11-28 09:51:57.522653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:18.819 [2024-11-28 09:51:57.522660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.819 [2024-11-28 09:51:57.522666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:18.819 [2024-11-28 09:51:57.522674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:19:18.819 [2024-11-28 09:51:57.522680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.819 [2024-11-28 09:51:57.532827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.819 [2024-11-28 09:51:57.532848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:18.819 [2024-11-28 09:51:57.532858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.115 ms 00:19:18.819 [2024-11-28 09:51:57.532864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.819 [2024-11-28 09:51:57.533167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.819 [2024-11-28 09:51:57.533175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:18.819 [2024-11-28 09:51:57.533184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:19:18.819 [2024-11-28 09:51:57.533189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.819 [2024-11-28 09:51:57.562106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.819 [2024-11-28 09:51:57.562130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:18.819 [2024-11-28 09:51:57.562142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.819 [2024-11-28 09:51:57.562148] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:18.819 [2024-11-28 09:51:57.562204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.819 [2024-11-28 09:51:57.562210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:18.819 [2024-11-28 09:51:57.562218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.819 [2024-11-28 09:51:57.562224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.819 [2024-11-28 09:51:57.562278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.819 [2024-11-28 09:51:57.562285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:18.819 [2024-11-28 09:51:57.562293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.819 [2024-11-28 09:51:57.562299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.819 [2024-11-28 09:51:57.562313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.819 [2024-11-28 09:51:57.562319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:18.819 [2024-11-28 09:51:57.562326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.819 [2024-11-28 09:51:57.562333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.819 [2024-11-28 09:51:57.636505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:18.819 [2024-11-28 09:51:57.636541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:18.819 [2024-11-28 09:51:57.636556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:18.819 [2024-11-28 09:51:57.636565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.080 [2024-11-28 09:51:57.703831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.080 [2024-11-28 09:51:57.703869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:19.080 [2024-11-28 09:51:57.703882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.080 [2024-11-28 09:51:57.703890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.080 [2024-11-28 09:51:57.703991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.080 [2024-11-28 09:51:57.704001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:19.080 [2024-11-28 09:51:57.704011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.080 [2024-11-28 09:51:57.704019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.080 [2024-11-28 09:51:57.704066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.080 [2024-11-28 09:51:57.704076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:19.080 [2024-11-28 09:51:57.704086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.080 [2024-11-28 09:51:57.704093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.080 [2024-11-28 09:51:57.704208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.080 [2024-11-28 09:51:57.704222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:19.080 [2024-11-28 09:51:57.704234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:19.080 [2024-11-28 09:51:57.704242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.080 [2024-11-28 09:51:57.704275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.080 [2024-11-28 09:51:57.704283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:19.080 [2024-11-28 09:51:57.704293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.080 [2024-11-28 09:51:57.704300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.080 [2024-11-28 09:51:57.704337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.080 [2024-11-28 09:51:57.704349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:19.080 [2024-11-28 09:51:57.704359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.080 [2024-11-28 09:51:57.704372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.080 [2024-11-28 09:51:57.704418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.080 [2024-11-28 09:51:57.704428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:19.080 [2024-11-28 09:51:57.704437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.080 [2024-11-28 09:51:57.704446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.080 [2024-11-28 09:51:57.704585] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 466.219 ms, result 0 00:19:19.080 true 00:19:19.080 09:51:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75933 00:19:19.080 09:51:57 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75933 ']' 00:19:19.080 09:51:57 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75933 00:19:19.080 09:51:57 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:19.080 09:51:57 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.080 09:51:57 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75933 00:19:19.080 killing process with pid 75933 00:19:19.080 Received shutdown signal, test time was about 4.000000 seconds 00:19:19.080 00:19:19.080 Latency(us) 00:19:19.080 [2024-11-28T09:51:57.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.080 [2024-11-28T09:51:57.960Z] =================================================================================================================== 00:19:19.080 [2024-11-28T09:51:57.960Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:19.080 09:51:57 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:19.080 09:51:57 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:19.080 09:51:57 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75933' 00:19:19.080 09:51:57 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75933 00:19:19.080 09:51:57 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75933 00:19:19.652 Remove shared memory files 00:19:19.652 09:51:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:19.652 09:51:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:19.652 09:51:58 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:19.652 09:51:58 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:19.652 09:51:58 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:19.652 09:51:58 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:19.652 09:51:58 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:19.652 09:51:58 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:19.914 00:19:19.914 real 0m21.957s 00:19:19.914 user 0m24.481s 00:19:19.914 sys 0m0.836s 00:19:19.914 09:51:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.914 ************************************ 00:19:19.914 END TEST ftl_bdevperf 00:19:19.914 ************************************ 00:19:19.914 09:51:58 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:19.914 09:51:58 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:19.914 09:51:58 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:19.914 09:51:58 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.914 09:51:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:19.914 ************************************ 00:19:19.914 START TEST ftl_trim 00:19:19.914 ************************************ 00:19:19.914 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:19.914 * Looking for test storage... 00:19:19.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:19.914 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:19.914 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:19.914 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:19.914 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:19.914 09:51:58 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:19.914 09:51:58 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:19.915 09:51:58 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:19.915 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:19.915 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:19.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.915 --rc genhtml_branch_coverage=1 00:19:19.915 --rc genhtml_function_coverage=1 00:19:19.915 --rc genhtml_legend=1 00:19:19.915 --rc geninfo_all_blocks=1 00:19:19.915 --rc geninfo_unexecuted_blocks=1 00:19:19.915 00:19:19.915 ' 00:19:19.915 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:19.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.915 --rc genhtml_branch_coverage=1 00:19:19.915 --rc genhtml_function_coverage=1 00:19:19.915 --rc genhtml_legend=1 00:19:19.915 --rc geninfo_all_blocks=1 00:19:19.915 --rc geninfo_unexecuted_blocks=1 00:19:19.915 00:19:19.915 ' 00:19:19.915 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:19.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.915 --rc genhtml_branch_coverage=1 00:19:19.915 --rc genhtml_function_coverage=1 00:19:19.915 --rc genhtml_legend=1 00:19:19.915 --rc geninfo_all_blocks=1 00:19:19.915 --rc geninfo_unexecuted_blocks=1 00:19:19.915 00:19:19.915 ' 00:19:19.915 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:19.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.915 --rc genhtml_branch_coverage=1 00:19:19.915 --rc genhtml_function_coverage=1 00:19:19.915 --rc genhtml_legend=1 00:19:19.915 --rc geninfo_all_blocks=1 00:19:19.915 --rc geninfo_unexecuted_blocks=1 00:19:19.915 00:19:19.915 ' 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:19.915 09:51:58 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76279 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76279 00:19:19.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.915 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76279 ']' 00:19:19.915 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.915 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.915 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.915 09:51:58 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:19.915 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.915 09:51:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:20.177 [2024-11-28 09:51:58.855078] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:20.177 [2024-11-28 09:51:58.855246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76279 ] 00:19:20.177 [2024-11-28 09:51:59.019645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:20.543 [2024-11-28 09:51:59.163569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.543 [2024-11-28 09:51:59.163911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.543 [2024-11-28 09:51:59.163971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.150 09:51:59 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.150 09:51:59 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:21.150 09:51:59 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:21.150 09:51:59 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:21.150 09:51:59 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:21.150 09:51:59 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:21.150 09:51:59 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:21.150 09:51:59 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:21.410 09:52:00 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:21.410 09:52:00 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:21.410 09:52:00 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:21.410 09:52:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:21.410 09:52:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:21.410 09:52:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:21.410 09:52:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:21.410 09:52:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:21.671 09:52:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:21.671 { 00:19:21.671 "name": "nvme0n1", 00:19:21.671 "aliases": [ 
00:19:21.671 "60bc29ad-b7e8-497e-b932-96e86c597b32" 00:19:21.671 ], 00:19:21.671 "product_name": "NVMe disk", 00:19:21.671 "block_size": 4096, 00:19:21.671 "num_blocks": 1310720, 00:19:21.671 "uuid": "60bc29ad-b7e8-497e-b932-96e86c597b32", 00:19:21.671 "numa_id": -1, 00:19:21.671 "assigned_rate_limits": { 00:19:21.671 "rw_ios_per_sec": 0, 00:19:21.671 "rw_mbytes_per_sec": 0, 00:19:21.671 "r_mbytes_per_sec": 0, 00:19:21.671 "w_mbytes_per_sec": 0 00:19:21.671 }, 00:19:21.671 "claimed": true, 00:19:21.671 "claim_type": "read_many_write_one", 00:19:21.671 "zoned": false, 00:19:21.671 "supported_io_types": { 00:19:21.671 "read": true, 00:19:21.671 "write": true, 00:19:21.671 "unmap": true, 00:19:21.671 "flush": true, 00:19:21.671 "reset": true, 00:19:21.671 "nvme_admin": true, 00:19:21.671 "nvme_io": true, 00:19:21.671 "nvme_io_md": false, 00:19:21.671 "write_zeroes": true, 00:19:21.671 "zcopy": false, 00:19:21.671 "get_zone_info": false, 00:19:21.671 "zone_management": false, 00:19:21.671 "zone_append": false, 00:19:21.671 "compare": true, 00:19:21.671 "compare_and_write": false, 00:19:21.671 "abort": true, 00:19:21.671 "seek_hole": false, 00:19:21.671 "seek_data": false, 00:19:21.671 "copy": true, 00:19:21.671 "nvme_iov_md": false 00:19:21.671 }, 00:19:21.671 "driver_specific": { 00:19:21.671 "nvme": [ 00:19:21.671 { 00:19:21.671 "pci_address": "0000:00:11.0", 00:19:21.671 "trid": { 00:19:21.671 "trtype": "PCIe", 00:19:21.671 "traddr": "0000:00:11.0" 00:19:21.671 }, 00:19:21.671 "ctrlr_data": { 00:19:21.671 "cntlid": 0, 00:19:21.671 "vendor_id": "0x1b36", 00:19:21.671 "model_number": "QEMU NVMe Ctrl", 00:19:21.671 "serial_number": "12341", 00:19:21.671 "firmware_revision": "8.0.0", 00:19:21.671 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:21.671 "oacs": { 00:19:21.671 "security": 0, 00:19:21.671 "format": 1, 00:19:21.671 "firmware": 0, 00:19:21.671 "ns_manage": 1 00:19:21.671 }, 00:19:21.671 "multi_ctrlr": false, 00:19:21.671 "ana_reporting": false 00:19:21.671 }, 00:19:21.671 "vs": { 00:19:21.671 "nvme_version": "1.4" 00:19:21.671 }, 00:19:21.671 "ns_data": { 00:19:21.671 "id": 1, 00:19:21.671 "can_share": false 00:19:21.671 } 00:19:21.671 } 00:19:21.671 ], 00:19:21.671 "mp_policy": "active_passive" 00:19:21.671 } 00:19:21.671 } 00:19:21.671 ]' 00:19:21.671 09:52:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:21.671 09:52:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:21.671 09:52:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:21.671 09:52:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:21.671 09:52:00 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:21.672 09:52:00 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:21.672 09:52:00 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:21.672 09:52:00 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:21.672 09:52:00 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:21.672 09:52:00 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:21.672 09:52:00 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:21.932 09:52:00 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=be0b3982-a589-40b1-aeb5-4fd55a76b974 00:19:21.932 09:52:00 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:21.932 09:52:00 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u be0b3982-a589-40b1-aeb5-4fd55a76b974 00:19:22.194 09:52:00 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:22.454 09:52:01 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=f768c68a-ba69-4561-bc6d-4e39a54e3184 00:19:22.454 09:52:01 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f768c68a-ba69-4561-bc6d-4e39a54e3184 00:19:22.715 09:52:01 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=20e4bd00-9d7d-48bb-aca2-6cffc946cab2 00:19:22.715 09:52:01 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 20e4bd00-9d7d-48bb-aca2-6cffc946cab2 00:19:22.715 09:52:01 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:22.715 09:52:01 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:22.715 09:52:01 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=20e4bd00-9d7d-48bb-aca2-6cffc946cab2 00:19:22.715 09:52:01 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:22.715 09:52:01 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 20e4bd00-9d7d-48bb-aca2-6cffc946cab2 00:19:22.715 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=20e4bd00-9d7d-48bb-aca2-6cffc946cab2 00:19:22.715 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:22.715 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:22.715 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:22.715 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 20e4bd00-9d7d-48bb-aca2-6cffc946cab2 00:19:22.715 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:22.715 { 00:19:22.715 "name": "20e4bd00-9d7d-48bb-aca2-6cffc946cab2", 00:19:22.715 "aliases": [ 00:19:22.715 "lvs/nvme0n1p0" 00:19:22.715 ], 00:19:22.715 "product_name": "Logical Volume", 00:19:22.715 "block_size": 4096, 00:19:22.715 "num_blocks": 26476544, 00:19:22.715 "uuid": "20e4bd00-9d7d-48bb-aca2-6cffc946cab2", 00:19:22.715 "assigned_rate_limits": { 00:19:22.715 "rw_ios_per_sec": 0, 00:19:22.715 "rw_mbytes_per_sec": 0, 00:19:22.715 "r_mbytes_per_sec": 0, 00:19:22.715 "w_mbytes_per_sec": 0 00:19:22.715 }, 00:19:22.715 "claimed": false, 00:19:22.716 "zoned": false, 00:19:22.716 "supported_io_types": { 00:19:22.716 "read": true, 00:19:22.716 "write": true, 00:19:22.716 "unmap": true, 00:19:22.716 "flush": false, 00:19:22.716 "reset": true, 00:19:22.716 "nvme_admin": false, 00:19:22.716 "nvme_io": false, 00:19:22.716 "nvme_io_md": false, 00:19:22.716 "write_zeroes": true, 00:19:22.716 "zcopy": false, 00:19:22.716 "get_zone_info": false, 00:19:22.716 "zone_management": false, 00:19:22.716 "zone_append": false, 00:19:22.716 "compare": false, 00:19:22.716 "compare_and_write": false, 00:19:22.716 "abort": false, 00:19:22.716 "seek_hole": true, 00:19:22.716 "seek_data": true, 00:19:22.716 "copy": false, 00:19:22.716 "nvme_iov_md": false 00:19:22.716 }, 00:19:22.716 "driver_specific": { 00:19:22.716 "lvol": { 00:19:22.716 "lvol_store_uuid": "f768c68a-ba69-4561-bc6d-4e39a54e3184", 00:19:22.716 "base_bdev": "nvme0n1", 00:19:22.716 "thin_provision": true, 00:19:22.716 "num_allocated_clusters": 0, 00:19:22.716 "snapshot": false, 00:19:22.716 "clone": false, 00:19:22.716 "esnap_clone": false 00:19:22.716 } 00:19:22.716 } 00:19:22.716 } 00:19:22.716 ]' 00:19:22.716 09:52:01 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:22.716 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:22.716 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:22.975 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:22.975 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:22.975 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:22.975 09:52:01 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:22.975 09:52:01 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:22.975 09:52:01 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:23.234 09:52:01 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:23.234 09:52:01 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:23.234 09:52:01 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 20e4bd00-9d7d-48bb-aca2-6cffc946cab2 00:19:23.234 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=20e4bd00-9d7d-48bb-aca2-6cffc946cab2 00:19:23.234 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:23.234 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:23.234 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:23.234 09:52:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 20e4bd00-9d7d-48bb-aca2-6cffc946cab2 00:19:23.234 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:23.234 { 00:19:23.234 "name": "20e4bd00-9d7d-48bb-aca2-6cffc946cab2", 00:19:23.234 "aliases": [ 00:19:23.234 "lvs/nvme0n1p0" 00:19:23.234 ], 00:19:23.234 "product_name": "Logical Volume", 00:19:23.234 "block_size": 4096, 00:19:23.234 "num_blocks": 26476544, 00:19:23.234 "uuid": "20e4bd00-9d7d-48bb-aca2-6cffc946cab2", 00:19:23.234 "assigned_rate_limits": { 00:19:23.234 "rw_ios_per_sec": 0, 00:19:23.234 "rw_mbytes_per_sec": 0, 00:19:23.234 "r_mbytes_per_sec": 0, 00:19:23.234 "w_mbytes_per_sec": 0 00:19:23.234 }, 00:19:23.234 "claimed": false, 00:19:23.234 "zoned": false, 00:19:23.234 "supported_io_types": { 00:19:23.234 "read": true, 00:19:23.234 "write": true, 00:19:23.234 "unmap": true, 00:19:23.234 "flush": false, 00:19:23.234 "reset": true, 00:19:23.234 "nvme_admin": false, 00:19:23.234 "nvme_io": false, 00:19:23.234 "nvme_io_md": false, 00:19:23.234 "write_zeroes": true, 00:19:23.234 "zcopy": false, 00:19:23.234 "get_zone_info": false, 00:19:23.234 "zone_management": false, 00:19:23.234 "zone_append": false, 00:19:23.234 "compare": false, 00:19:23.234 "compare_and_write": false, 00:19:23.234 "abort": false, 00:19:23.234 "seek_hole": true, 00:19:23.234 "seek_data": true, 00:19:23.234 "copy": false, 00:19:23.234 "nvme_iov_md": false 00:19:23.234 }, 00:19:23.234 "driver_specific": { 00:19:23.234 "lvol": { 00:19:23.234 "lvol_store_uuid": "f768c68a-ba69-4561-bc6d-4e39a54e3184", 00:19:23.234 "base_bdev": "nvme0n1", 00:19:23.234 "thin_provision": true, 00:19:23.234 "num_allocated_clusters": 0, 00:19:23.234 "snapshot": false, 00:19:23.234 "clone": false, 00:19:23.234 "esnap_clone": false 00:19:23.234 } 00:19:23.234 } 00:19:23.234 } 00:19:23.234 ]' 00:19:23.234 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:23.234 09:52:02 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:23.234 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:23.493 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:23.493 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:23.493 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:23.493 09:52:02 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:23.493 09:52:02 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:23.493 09:52:02 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:23.493 09:52:02 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:23.493 09:52:02 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 20e4bd00-9d7d-48bb-aca2-6cffc946cab2 00:19:23.493 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=20e4bd00-9d7d-48bb-aca2-6cffc946cab2 00:19:23.493 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:23.493 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:23.493 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:23.493 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 20e4bd00-9d7d-48bb-aca2-6cffc946cab2 00:19:23.752 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:23.752 { 00:19:23.752 "name": "20e4bd00-9d7d-48bb-aca2-6cffc946cab2", 00:19:23.752 "aliases": [ 00:19:23.752 "lvs/nvme0n1p0" 00:19:23.752 ], 00:19:23.752 "product_name": "Logical Volume", 00:19:23.752 "block_size": 4096, 00:19:23.752 "num_blocks": 26476544, 00:19:23.752 "uuid": "20e4bd00-9d7d-48bb-aca2-6cffc946cab2", 00:19:23.752 "assigned_rate_limits": { 00:19:23.752 "rw_ios_per_sec": 0, 00:19:23.752 "rw_mbytes_per_sec": 0, 00:19:23.752 "r_mbytes_per_sec": 0, 00:19:23.752 "w_mbytes_per_sec": 0 00:19:23.752 }, 00:19:23.752 "claimed": false, 00:19:23.752 "zoned": false, 00:19:23.752 "supported_io_types": { 00:19:23.752 "read": true, 00:19:23.752 "write": true, 00:19:23.752 "unmap": true, 00:19:23.752 "flush": false, 00:19:23.752 "reset": true, 00:19:23.752 "nvme_admin": false, 00:19:23.752 "nvme_io": false, 00:19:23.752 "nvme_io_md": false, 00:19:23.752 "write_zeroes": true, 00:19:23.752 "zcopy": false, 00:19:23.752 "get_zone_info": false, 00:19:23.752 "zone_management": false, 00:19:23.752 "zone_append": false, 00:19:23.752 "compare": false, 00:19:23.752 "compare_and_write": false, 00:19:23.752 "abort": false, 00:19:23.752 "seek_hole": true, 00:19:23.752 "seek_data": true, 00:19:23.752 "copy": false, 00:19:23.752 "nvme_iov_md": false 00:19:23.752 }, 00:19:23.752 "driver_specific": { 00:19:23.752 "lvol": { 00:19:23.752 "lvol_store_uuid": "f768c68a-ba69-4561-bc6d-4e39a54e3184", 00:19:23.752 "base_bdev": "nvme0n1", 00:19:23.752 "thin_provision": true, 00:19:23.752 "num_allocated_clusters": 0, 00:19:23.752 "snapshot": false, 00:19:23.752 "clone": false, 00:19:23.752 "esnap_clone": false 00:19:23.752 } 00:19:23.752 } 00:19:23.752 } 00:19:23.752 ]' 00:19:23.752 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:23.752 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:23.752 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:23.752 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:23.752 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:23.752 09:52:02 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:23.752 09:52:02 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:23.752 09:52:02 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 20e4bd00-9d7d-48bb-aca2-6cffc946cab2 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:24.012 [2024-11-28 09:52:02.792384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.012 [2024-11-28 09:52:02.792428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:24.012 [2024-11-28 09:52:02.792442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:24.012 [2024-11-28 09:52:02.792449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.012 [2024-11-28 09:52:02.794757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.012 [2024-11-28 09:52:02.794787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:24.012 [2024-11-28 09:52:02.794798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.282 ms 00:19:24.012 [2024-11-28 09:52:02.794804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.012 [2024-11-28 09:52:02.794875] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:24.012 [2024-11-28 09:52:02.795403] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:24.012 [2024-11-28 09:52:02.795424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.012 [2024-11-28 09:52:02.795431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:24.012 [2024-11-28 09:52:02.795439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:19:24.012 [2024-11-28 09:52:02.795446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.012 [2024-11-28 09:52:02.795534] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1a94f5e4-bdc1-43c4-93bd-1d0e31f9ca15 00:19:24.012 [2024-11-28 09:52:02.796775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.012 [2024-11-28 09:52:02.796806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:24.012 [2024-11-28 09:52:02.796815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:24.012 [2024-11-28 09:52:02.796824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.012 [2024-11-28 09:52:02.803558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.012 [2024-11-28 09:52:02.803586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:24.012 [2024-11-28 09:52:02.803594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.666 ms 00:19:24.012 [2024-11-28 09:52:02.803605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.012 [2024-11-28 09:52:02.803711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.012 [2024-11-28 09:52:02.803722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:24.012 [2024-11-28 09:52:02.803729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.058 ms 00:19:24.012 [2024-11-28 09:52:02.803739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.012 [2024-11-28 09:52:02.803767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.012 [2024-11-28 09:52:02.803775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:24.012 [2024-11-28 09:52:02.803781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:24.012 [2024-11-28 09:52:02.803790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.012 [2024-11-28 09:52:02.803816] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:24.012 [2024-11-28 09:52:02.807015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.012 [2024-11-28 09:52:02.807040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:24.012 [2024-11-28 09:52:02.807051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.202 ms 00:19:24.012 [2024-11-28 09:52:02.807057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.012 [2024-11-28 09:52:02.807103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.012 [2024-11-28 09:52:02.807122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:24.012 [2024-11-28 09:52:02.807130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:24.012 [2024-11-28 09:52:02.807136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.012 [2024-11-28 09:52:02.807172] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:24.012 [2024-11-28 09:52:02.807283] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:24.012 [2024-11-28 09:52:02.807297] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:24.012 [2024-11-28 09:52:02.807305] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:24.012 [2024-11-28 09:52:02.807315] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:24.012 [2024-11-28 09:52:02.807323] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:24.012 [2024-11-28 09:52:02.807330] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:24.012 [2024-11-28 09:52:02.807336] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:24.012 [2024-11-28 09:52:02.807345] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:24.012 [2024-11-28 09:52:02.807351] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:24.012 [2024-11-28 09:52:02.807359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.012 [2024-11-28 09:52:02.807365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:24.012 [2024-11-28 09:52:02.807373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:19:24.012 [2024-11-28 09:52:02.807379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.012 [2024-11-28 09:52:02.807457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.012 
[2024-11-28 09:52:02.807465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:24.012 [2024-11-28 09:52:02.807472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:24.012 [2024-11-28 09:52:02.807477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.012 [2024-11-28 09:52:02.807588] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:24.012 [2024-11-28 09:52:02.807597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:24.012 [2024-11-28 09:52:02.807605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.012 [2024-11-28 09:52:02.807611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.012 [2024-11-28 09:52:02.807618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:24.012 [2024-11-28 09:52:02.807623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:24.012 [2024-11-28 09:52:02.807630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:24.012 [2024-11-28 09:52:02.807635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:24.012 [2024-11-28 09:52:02.807642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:24.012 [2024-11-28 09:52:02.807647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.012 [2024-11-28 09:52:02.807654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:24.012 [2024-11-28 09:52:02.807659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:24.013 [2024-11-28 09:52:02.807665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.013 [2024-11-28 09:52:02.807670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:24.013 [2024-11-28 09:52:02.807676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:24.013 [2024-11-28 09:52:02.807682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.013 [2024-11-28 09:52:02.807691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:24.013 [2024-11-28 09:52:02.807696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:24.013 [2024-11-28 09:52:02.807702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.013 [2024-11-28 09:52:02.807707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:24.013 [2024-11-28 09:52:02.807715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:24.013 [2024-11-28 09:52:02.807720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.013 [2024-11-28 09:52:02.807727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:24.013 [2024-11-28 09:52:02.807732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:24.013 [2024-11-28 09:52:02.807739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.013 [2024-11-28 09:52:02.807746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:24.013 [2024-11-28 09:52:02.807753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:24.013 [2024-11-28 09:52:02.807759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.013 [2024-11-28 09:52:02.807765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:24.013 [2024-11-28 09:52:02.807771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:24.013 [2024-11-28 09:52:02.807777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.013 [2024-11-28 09:52:02.807783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:24.013 [2024-11-28 09:52:02.807791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:24.013 [2024-11-28 09:52:02.807796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.013 [2024-11-28 09:52:02.807802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:24.013 [2024-11-28 09:52:02.807807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:24.013 [2024-11-28 09:52:02.807813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.013 [2024-11-28 09:52:02.807818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:24.013 [2024-11-28 09:52:02.807825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:24.013 [2024-11-28 09:52:02.807830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.013 [2024-11-28 09:52:02.807837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:24.013 [2024-11-28 09:52:02.807842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:24.013 [2024-11-28 09:52:02.807849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.013 [2024-11-28 09:52:02.807854] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:24.013 [2024-11-28 09:52:02.807861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:24.013 [2024-11-28 09:52:02.807867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.013 [2024-11-28 09:52:02.807874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.013 [2024-11-28 09:52:02.807879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:24.013 [2024-11-28 09:52:02.807889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:24.013 [2024-11-28 09:52:02.807894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:24.013 [2024-11-28 09:52:02.807901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:24.013 [2024-11-28 09:52:02.807905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:24.013 [2024-11-28 09:52:02.807912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:24.013 [2024-11-28 09:52:02.807920] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:24.013 [2024-11-28 09:52:02.807929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.013 [2024-11-28 09:52:02.807939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:24.013 [2024-11-28 09:52:02.807947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:24.013 [2024-11-28 09:52:02.807953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:24.013 [2024-11-28 09:52:02.807961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:24.013 [2024-11-28 09:52:02.807966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:24.013 [2024-11-28 09:52:02.807973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:24.013 [2024-11-28 09:52:02.807979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:24.013 [2024-11-28 09:52:02.807986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:24.013 [2024-11-28 09:52:02.807991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:24.013 [2024-11-28 09:52:02.808000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:24.013 [2024-11-28 09:52:02.808006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:24.013 [2024-11-28 09:52:02.808013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:24.013 [2024-11-28 09:52:02.808018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:24.013 [2024-11-28 09:52:02.808027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:24.013 [2024-11-28 09:52:02.808032] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:24.013 [2024-11-28 09:52:02.808040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.013 [2024-11-28 09:52:02.808046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:24.013 [2024-11-28 09:52:02.808053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:24.013 [2024-11-28 09:52:02.808058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:24.013 [2024-11-28 09:52:02.808066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:24.013 [2024-11-28 09:52:02.808072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.013 [2024-11-28 09:52:02.808080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:24.013 [2024-11-28 09:52:02.808086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:19:24.013 [2024-11-28 09:52:02.808093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.013 [2024-11-28 09:52:02.808190] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:24.013 [2024-11-28 09:52:02.808203] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:26.541 [2024-11-28 09:52:05.199810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.541 [2024-11-28 09:52:05.199856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:26.541 [2024-11-28 09:52:05.199869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2391.611 ms 00:19:26.541 [2024-11-28 09:52:05.199881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.541 [2024-11-28 09:52:05.227982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.541 [2024-11-28 09:52:05.228023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:26.541 [2024-11-28 09:52:05.228034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.867 ms 00:19:26.541 [2024-11-28 09:52:05.228045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.541 [2024-11-28 09:52:05.228186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.541 [2024-11-28 09:52:05.228200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:26.541 [2024-11-28 09:52:05.228223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:26.541 [2024-11-28 09:52:05.228237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.541 [2024-11-28 09:52:05.274675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.542 [2024-11-28 09:52:05.274716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:26.542 [2024-11-28 09:52:05.274728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.401 ms 00:19:26.542 [2024-11-28 09:52:05.274739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.542 [2024-11-28 09:52:05.274835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.542 [2024-11-28 09:52:05.274848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:26.542 [2024-11-28 09:52:05.274858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:26.542 [2024-11-28 09:52:05.274867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.542 [2024-11-28 09:52:05.275316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.542 [2024-11-28 09:52:05.275341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:26.542 [2024-11-28 09:52:05.275350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:19:26.542 [2024-11-28 09:52:05.275360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.542 [2024-11-28 09:52:05.275473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.542 [2024-11-28 09:52:05.275484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:26.542 [2024-11-28 09:52:05.275506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:26.542 [2024-11-28 09:52:05.275517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.542 [2024-11-28 09:52:05.291417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.542 [2024-11-28 09:52:05.291451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:26.542 [2024-11-28 09:52:05.291461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.864 ms 00:19:26.542 [2024-11-28 09:52:05.291471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.542 [2024-11-28 09:52:05.303743] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:26.542 [2024-11-28 09:52:05.320857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.542 [2024-11-28 09:52:05.320889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:26.542 [2024-11-28 09:52:05.320901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.278 ms 00:19:26.542 [2024-11-28 09:52:05.320909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.542 [2024-11-28 09:52:05.384575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.542 [2024-11-28 09:52:05.384623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:26.542 [2024-11-28 09:52:05.384637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.594 ms 00:19:26.542 [2024-11-28 09:52:05.384646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.542 [2024-11-28 09:52:05.384854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.542 [2024-11-28 09:52:05.384866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:26.542 [2024-11-28 09:52:05.384879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:19:26.542 [2024-11-28 09:52:05.384887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.542 [2024-11-28 09:52:05.408391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.542 [2024-11-28 09:52:05.408423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:26.542 [2024-11-28 09:52:05.408436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.463 ms 00:19:26.542 [2024-11-28 09:52:05.408446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.800 [2024-11-28 09:52:05.430643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.800 [2024-11-28 09:52:05.430672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:26.800 [2024-11-28 09:52:05.430685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.150 ms 00:19:26.800 [2024-11-28 09:52:05.430693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.800 [2024-11-28 09:52:05.431425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.800 [2024-11-28 09:52:05.431449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:26.800 [2024-11-28 09:52:05.431460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:19:26.800 [2024-11-28 09:52:05.431467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.800 [2024-11-28 09:52:05.498219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.800 [2024-11-28 09:52:05.498254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:26.800 [2024-11-28 09:52:05.498269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.714 ms 00:19:26.800 [2024-11-28 09:52:05.498277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
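(The long run of [FTL][ftl0] trace_step notices around this point is the FTL management startup kicked off by the bdev_ftl_create RPC issued earlier in the trace. Condensed into a standalone script, the RPC sequence that trim.sh and ftl/common.sh ran to build this device stack looks roughly like the sketch below; the PCI addresses, sizes and bdev names are the ones visible in this run, while the lvstore/lvol UUIDs, which change on every run, are captured into shell variables instead of being hard-coded. This is an illustrative sketch, not a replacement for the actual test scripts.)

```bash
#!/usr/bin/env bash
# Condensed sketch of the RPC sequence visible in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: the QEMU NVMe controller at 0000:00:11.0, exposed as nvme0n1.
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

# Thin-provisioned 103424 MiB lvol on top of it (stale lvstores are deleted
# first, as the clear_lvols step in the trace shows).
lvs_uuid=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
lvol_uuid=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid")

# NV cache: second controller at 0000:00:10.0, split into a 5171 MiB partition.
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$rpc bdev_split_create nvc0n1 -s 5171 1     # -> nvc0n1p0

# FTL bdev over the lvol plus the cache partition; this call is what produces
# the "FTL startup" management trace in the log.
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
```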
00:19:26.800 [2024-11-28 09:52:05.522878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.800 [2024-11-28 09:52:05.522909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:26.800 [2024-11-28 09:52:05.522922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.496 ms 00:19:26.800 [2024-11-28 09:52:05.522929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.800 [2024-11-28 09:52:05.545952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.800 [2024-11-28 09:52:05.545983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:26.800 [2024-11-28 09:52:05.545996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.951 ms 00:19:26.800 [2024-11-28 09:52:05.546003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.800 [2024-11-28 09:52:05.569118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.800 [2024-11-28 09:52:05.569174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:26.800 [2024-11-28 09:52:05.569187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.041 ms 00:19:26.800 [2024-11-28 09:52:05.569195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.800 [2024-11-28 09:52:05.569259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.800 [2024-11-28 09:52:05.569269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:26.800 [2024-11-28 09:52:05.569282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:26.800 [2024-11-28 09:52:05.569289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.800 [2024-11-28 09:52:05.569367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.800 [2024-11-28 09:52:05.569376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:26.800 [2024-11-28 09:52:05.569386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:26.800 [2024-11-28 09:52:05.569393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.800 [2024-11-28 09:52:05.570362] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:26.800 [2024-11-28 09:52:05.573336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2777.664 ms, result 0 00:19:26.800 [2024-11-28 09:52:05.574144] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:26.800 { 00:19:26.800 "name": "ftl0", 00:19:26.800 "uuid": "1a94f5e4-bdc1-43c4-93bd-1d0e31f9ca15" 00:19:26.800 } 00:19:26.800 09:52:05 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:26.800 09:52:05 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:26.800 09:52:05 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:26.800 09:52:05 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:26.800 09:52:05 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:26.800 09:52:05 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:26.800 09:52:05 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:27.059 09:52:05 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:27.317 [ 00:19:27.317 { 00:19:27.317 "name": "ftl0", 00:19:27.317 "aliases": [ 00:19:27.317 "1a94f5e4-bdc1-43c4-93bd-1d0e31f9ca15" 00:19:27.317 ], 00:19:27.317 "product_name": "FTL disk", 00:19:27.317 "block_size": 4096, 00:19:27.317 "num_blocks": 23592960, 00:19:27.317 "uuid": "1a94f5e4-bdc1-43c4-93bd-1d0e31f9ca15", 00:19:27.317 "assigned_rate_limits": { 00:19:27.317 "rw_ios_per_sec": 0, 00:19:27.317 "rw_mbytes_per_sec": 0, 00:19:27.317 "r_mbytes_per_sec": 0, 00:19:27.317 "w_mbytes_per_sec": 0 00:19:27.317 }, 00:19:27.317 "claimed": false, 00:19:27.317 "zoned": false, 00:19:27.317 "supported_io_types": { 00:19:27.317 "read": true, 00:19:27.317 "write": true, 00:19:27.317 "unmap": true, 00:19:27.317 "flush": true, 00:19:27.317 "reset": false, 00:19:27.317 "nvme_admin": false, 00:19:27.317 "nvme_io": false, 00:19:27.317 "nvme_io_md": false, 00:19:27.317 "write_zeroes": true, 00:19:27.317 "zcopy": false, 00:19:27.317 "get_zone_info": false, 00:19:27.317 "zone_management": false, 00:19:27.317 "zone_append": false, 00:19:27.317 "compare": false, 00:19:27.317 "compare_and_write": false, 00:19:27.317 "abort": false, 00:19:27.317 "seek_hole": false, 00:19:27.317 "seek_data": false, 00:19:27.317 "copy": false, 00:19:27.317 "nvme_iov_md": false 00:19:27.317 }, 00:19:27.317 "driver_specific": { 00:19:27.317 "ftl": { 00:19:27.317 "base_bdev": "20e4bd00-9d7d-48bb-aca2-6cffc946cab2", 00:19:27.317 "cache": "nvc0n1p0" 00:19:27.317 } 00:19:27.317 } 00:19:27.317 } 00:19:27.317 ] 00:19:27.317 09:52:05 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:27.317 09:52:05 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:27.317 09:52:05 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:27.575 09:52:06 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:27.575 09:52:06 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:27.575 09:52:06 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:27.575 { 00:19:27.575 "name": "ftl0", 00:19:27.575 "aliases": [ 00:19:27.575 "1a94f5e4-bdc1-43c4-93bd-1d0e31f9ca15" 00:19:27.575 ], 00:19:27.575 "product_name": "FTL disk", 00:19:27.575 "block_size": 4096, 00:19:27.575 "num_blocks": 23592960, 00:19:27.575 "uuid": "1a94f5e4-bdc1-43c4-93bd-1d0e31f9ca15", 00:19:27.575 "assigned_rate_limits": { 00:19:27.575 "rw_ios_per_sec": 0, 00:19:27.575 "rw_mbytes_per_sec": 0, 00:19:27.575 "r_mbytes_per_sec": 0, 00:19:27.575 "w_mbytes_per_sec": 0 00:19:27.575 }, 00:19:27.575 "claimed": false, 00:19:27.575 "zoned": false, 00:19:27.575 "supported_io_types": { 00:19:27.575 "read": true, 00:19:27.575 "write": true, 00:19:27.575 "unmap": true, 00:19:27.575 "flush": true, 00:19:27.575 "reset": false, 00:19:27.575 "nvme_admin": false, 00:19:27.575 "nvme_io": false, 00:19:27.575 "nvme_io_md": false, 00:19:27.575 "write_zeroes": true, 00:19:27.575 "zcopy": false, 00:19:27.575 "get_zone_info": false, 00:19:27.575 "zone_management": false, 00:19:27.575 "zone_append": false, 00:19:27.575 "compare": false, 00:19:27.575 "compare_and_write": false, 00:19:27.575 "abort": false, 00:19:27.575 "seek_hole": false, 00:19:27.575 "seek_data": false, 00:19:27.575 "copy": false, 00:19:27.575 "nvme_iov_md": false 00:19:27.575 }, 00:19:27.575 "driver_specific": { 00:19:27.575 "ftl": { 00:19:27.575 "base_bdev": "20e4bd00-9d7d-48bb-aca2-6cffc946cab2", 
00:19:27.575 "cache": "nvc0n1p0" 00:19:27.575 } 00:19:27.575 } 00:19:27.575 } 00:19:27.575 ]' 00:19:27.575 09:52:06 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:27.575 09:52:06 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:27.575 09:52:06 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:27.836 [2024-11-28 09:52:06.597650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.836 [2024-11-28 09:52:06.597684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:27.836 [2024-11-28 09:52:06.597694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:27.836 [2024-11-28 09:52:06.597703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.836 [2024-11-28 09:52:06.597738] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:27.836 [2024-11-28 09:52:06.599943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.836 [2024-11-28 09:52:06.599967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:27.836 [2024-11-28 09:52:06.599979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.191 ms 00:19:27.836 [2024-11-28 09:52:06.599986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.836 [2024-11-28 09:52:06.600470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.836 [2024-11-28 09:52:06.600490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:27.836 [2024-11-28 09:52:06.600499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:19:27.836 [2024-11-28 09:52:06.600505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.836 [2024-11-28 09:52:06.603258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.836 [2024-11-28 09:52:06.603276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:27.836 [2024-11-28 09:52:06.603286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.720 ms 00:19:27.836 [2024-11-28 09:52:06.603293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.836 [2024-11-28 09:52:06.608542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.836 [2024-11-28 09:52:06.608564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:27.836 [2024-11-28 09:52:06.608573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.213 ms 00:19:27.836 [2024-11-28 09:52:06.608579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.836 [2024-11-28 09:52:06.626180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.836 [2024-11-28 09:52:06.626205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:27.836 [2024-11-28 09:52:06.626217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.528 ms 00:19:27.836 [2024-11-28 09:52:06.626224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.836 [2024-11-28 09:52:06.638149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.836 [2024-11-28 09:52:06.638188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:27.836 [2024-11-28 09:52:06.638201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 11.872 ms 00:19:27.836 [2024-11-28 09:52:06.638207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.836 [2024-11-28 09:52:06.638371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.836 [2024-11-28 09:52:06.638380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:27.836 [2024-11-28 09:52:06.638389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:19:27.836 [2024-11-28 09:52:06.638395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.836 [2024-11-28 09:52:06.655951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.836 [2024-11-28 09:52:06.655975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:27.836 [2024-11-28 09:52:06.655985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.529 ms 00:19:27.836 [2024-11-28 09:52:06.655990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.836 [2024-11-28 09:52:06.673374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.836 [2024-11-28 09:52:06.673503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:27.836 [2024-11-28 09:52:06.673521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.319 ms 00:19:27.836 [2024-11-28 09:52:06.673526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.836 [2024-11-28 09:52:06.690701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.836 [2024-11-28 09:52:06.690725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:27.836 [2024-11-28 09:52:06.690735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.128 ms 00:19:27.836 [2024-11-28 09:52:06.690741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.836 [2024-11-28 09:52:06.707548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.836 [2024-11-28 09:52:06.707572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:27.836 [2024-11-28 09:52:06.707581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.715 ms 00:19:27.836 [2024-11-28 09:52:06.707587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.836 [2024-11-28 09:52:06.707634] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:27.836 [2024-11-28 09:52:06.707646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:27.836 [2024-11-28 09:52:06.707655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:27.836 [2024-11-28 09:52:06.707661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:27.836 [2024-11-28 09:52:06.707668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:27.836 [2024-11-28 09:52:06.707674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707696] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 
[2024-11-28 09:52:06.707875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.707995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:27.837 [2024-11-28 09:52:06.708042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:27.837 [2024-11-28 09:52:06.708228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:27.838 [2024-11-28 09:52:06.708357] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:27.838 [2024-11-28 09:52:06.708366] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1a94f5e4-bdc1-43c4-93bd-1d0e31f9ca15 00:19:27.838 [2024-11-28 09:52:06.708372] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:27.838 [2024-11-28 09:52:06.708379] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:27.838 [2024-11-28 09:52:06.708387] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:27.838 [2024-11-28 09:52:06.708394] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:27.838 [2024-11-28 09:52:06.708399] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:27.838 [2024-11-28 09:52:06.708407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
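The ftl_dev_dump_stats block above reports total writes: 960 against user writes: 0, which is why WAF prints as inf: write amplification is the ratio of media writes to host (user) writes, and at shutdown time all 960 written blocks were FTL metadata. A minimal sketch of that ratio using the counters from this dump; the helper name is made up for illustration and is not part of the test scripts.

```bash
# Sketch (hypothetical helper): recompute the WAF value shown by ftl_dev_dump_stats.
# WAF = total media writes / user writes; with user_writes = 0 the ratio is undefined ("inf").
waf() {
  local total_writes=$1 user_writes=$2
  if [ "$user_writes" -eq 0 ]; then
    echo "inf"
  else
    # bash only does integer math, so use awk for the fractional result.
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "%.3f\n", t / u }'
  fi
}

waf 960 0    # -> inf, matching the dump above
```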
00:19:27.838 [2024-11-28 09:52:06.708412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:27.838 [2024-11-28 09:52:06.708418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:27.838 [2024-11-28 09:52:06.708423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:27.838 [2024-11-28 09:52:06.708431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.838 [2024-11-28 09:52:06.708437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:27.838 [2024-11-28 09:52:06.708446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:19:27.838 [2024-11-28 09:52:06.708452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.098 [2024-11-28 09:52:06.718182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.098 [2024-11-28 09:52:06.718205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.098 [2024-11-28 09:52:06.718215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.706 ms 00:19:28.098 [2024-11-28 09:52:06.718221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.098 [2024-11-28 09:52:06.718526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.098 [2024-11-28 09:52:06.718535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.098 [2024-11-28 09:52:06.718543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:19:28.098 [2024-11-28 09:52:06.718550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.098 [2024-11-28 09:52:06.755376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.098 [2024-11-28 09:52:06.755405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.098 [2024-11-28 09:52:06.755415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.098 [2024-11-28 09:52:06.755421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.098 [2024-11-28 09:52:06.755504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.098 [2024-11-28 09:52:06.755512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.098 [2024-11-28 09:52:06.755520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.098 [2024-11-28 09:52:06.755527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.098 [2024-11-28 09:52:06.755580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.098 [2024-11-28 09:52:06.755590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.098 [2024-11-28 09:52:06.755601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.098 [2024-11-28 09:52:06.755607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.098 [2024-11-28 09:52:06.755630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.098 [2024-11-28 09:52:06.755637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.098 [2024-11-28 09:52:06.755645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.098 [2024-11-28 09:52:06.755651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.098 [2024-11-28 09:52:06.822072] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.098 [2024-11-28 09:52:06.822107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.098 [2024-11-28 09:52:06.822118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.098 [2024-11-28 09:52:06.822125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.098 [2024-11-28 09:52:06.873607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.098 [2024-11-28 09:52:06.873639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.098 [2024-11-28 09:52:06.873652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.098 [2024-11-28 09:52:06.873658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.098 [2024-11-28 09:52:06.873741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.098 [2024-11-28 09:52:06.873749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.098 [2024-11-28 09:52:06.873762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.098 [2024-11-28 09:52:06.873769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.098 [2024-11-28 09:52:06.873819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.098 [2024-11-28 09:52:06.873826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.098 [2024-11-28 09:52:06.873834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.098 [2024-11-28 09:52:06.873840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.098 [2024-11-28 09:52:06.873934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.098 [2024-11-28 09:52:06.873942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.098 [2024-11-28 09:52:06.873951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.098 [2024-11-28 09:52:06.873959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.098 [2024-11-28 09:52:06.874001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.098 [2024-11-28 09:52:06.874009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:28.098 [2024-11-28 09:52:06.874017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.098 [2024-11-28 09:52:06.874022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.098 [2024-11-28 09:52:06.874075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.098 [2024-11-28 09:52:06.874082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.098 [2024-11-28 09:52:06.874092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.098 [2024-11-28 09:52:06.874100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.098 [2024-11-28 09:52:06.874164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.098 [2024-11-28 09:52:06.874173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.098 [2024-11-28 09:52:06.874181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.099 [2024-11-28 09:52:06.874188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:28.099 [2024-11-28 09:52:06.874367] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.697 ms, result 0 00:19:28.099 true 00:19:28.099 09:52:06 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76279 00:19:28.099 09:52:06 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76279 ']' 00:19:28.099 09:52:06 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76279 00:19:28.099 09:52:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:28.099 09:52:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.099 09:52:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76279 00:19:28.099 killing process with pid 76279 00:19:28.099 09:52:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.099 09:52:06 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.099 09:52:06 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76279' 00:19:28.099 09:52:06 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76279 00:19:28.099 09:52:06 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76279 00:19:34.663 09:52:12 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:34.922 65536+0 records in 00:19:34.922 65536+0 records out 00:19:34.922 268435456 bytes (268 MB, 256 MiB) copied, 1.06577 s, 252 MB/s 00:19:34.922 09:52:13 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:34.922 [2024-11-28 09:52:13.781141] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
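The shell trace interleaved above (trim.sh lines 60-69) walks through the unload-and-rewrite step of the trim test: read back the FTL bdev's num_blocks with jq, unload ftl0 over RPC, stop the host process, generate a 256 MiB random pattern, and replay it onto a freshly started ftl0 with spdk_dd. A condensed sketch of that sequence follows; the paths, pid, and values are the ones visible in this run, while the bdev_get_bdevs call is an assumption about where the JSON piped into jq came from (only its tail is shown above), and the dd destination is inferred from the --if argument of the spdk_dd call.

```bash
#!/usr/bin/env bash
# Condensed sketch of the trim-test steps traced above (not the verbatim trim.sh source).
SPDK=/home/vagrant/spdk_repo/spdk

# Expect ftl0 to expose 23592960 blocks (assumed source of the JSON fed to jq).
nb=$("$SPDK"/scripts/rpc.py bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks')
[ "$nb" -eq 23592960 ]

# Shut the FTL device down cleanly, then stop the host app (pid 76279 in this run),
# mirroring the killprocess helper from autotest_common.sh.
"$SPDK"/scripts/rpc.py bdev_ftl_unload -b ftl0
kill 76279
wait 76279

# 65536 x 4 KiB = 256 MiB of random data, then replay it onto a freshly started ftl0.
dd if=/dev/urandom of="$SPDK"/test/ftl/random_pattern bs=4K count=65536
"$SPDK"/build/bin/spdk_dd --if="$SPDK"/test/ftl/random_pattern --ob=ftl0 \
    --json="$SPDK"/test/ftl/config/ftl.json
```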
00:19:34.922 [2024-11-28 09:52:13.781247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76462 ] 00:19:35.182 [2024-11-28 09:52:13.928249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.182 [2024-11-28 09:52:14.012605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.441 [2024-11-28 09:52:14.244936] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.441 [2024-11-28 09:52:14.244995] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.702 [2024-11-28 09:52:14.398543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.702 [2024-11-28 09:52:14.398582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:35.702 [2024-11-28 09:52:14.398593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.702 [2024-11-28 09:52:14.398600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.702 [2024-11-28 09:52:14.400781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.702 [2024-11-28 09:52:14.400950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.702 [2024-11-28 09:52:14.400964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:19:35.702 [2024-11-28 09:52:14.400971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.702 [2024-11-28 09:52:14.401031] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:35.702 [2024-11-28 09:52:14.401593] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:35.702 [2024-11-28 09:52:14.401607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.702 [2024-11-28 09:52:14.401614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.702 [2024-11-28 09:52:14.401621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:19:35.702 [2024-11-28 09:52:14.401627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.702 [2024-11-28 09:52:14.402909] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:35.702 [2024-11-28 09:52:14.413439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.702 [2024-11-28 09:52:14.413467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:35.702 [2024-11-28 09:52:14.413476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.532 ms 00:19:35.702 [2024-11-28 09:52:14.413482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.702 [2024-11-28 09:52:14.413554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.702 [2024-11-28 09:52:14.413564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:35.702 [2024-11-28 09:52:14.413570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:35.702 [2024-11-28 09:52:14.413576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.702 [2024-11-28 09:52:14.419775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:35.702 [2024-11-28 09:52:14.419907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.702 [2024-11-28 09:52:14.419919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.168 ms 00:19:35.702 [2024-11-28 09:52:14.419925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.702 [2024-11-28 09:52:14.420000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.702 [2024-11-28 09:52:14.420008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.702 [2024-11-28 09:52:14.420015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:35.702 [2024-11-28 09:52:14.420021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.702 [2024-11-28 09:52:14.420040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.702 [2024-11-28 09:52:14.420046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:35.702 [2024-11-28 09:52:14.420053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:35.702 [2024-11-28 09:52:14.420058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.702 [2024-11-28 09:52:14.420077] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:35.702 [2024-11-28 09:52:14.423103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.702 [2024-11-28 09:52:14.423212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.702 [2024-11-28 09:52:14.423225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:19:35.702 [2024-11-28 09:52:14.423232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.702 [2024-11-28 09:52:14.423263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.702 [2024-11-28 09:52:14.423270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:35.702 [2024-11-28 09:52:14.423276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:35.702 [2024-11-28 09:52:14.423282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.702 [2024-11-28 09:52:14.423300] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:35.702 [2024-11-28 09:52:14.423316] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:35.702 [2024-11-28 09:52:14.423345] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:35.702 [2024-11-28 09:52:14.423357] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:35.702 [2024-11-28 09:52:14.423438] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:35.702 [2024-11-28 09:52:14.423447] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:35.702 [2024-11-28 09:52:14.423455] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:35.702 [2024-11-28 09:52:14.423466] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:35.702 [2024-11-28 09:52:14.423474] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:35.702 [2024-11-28 09:52:14.423480] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:35.702 [2024-11-28 09:52:14.423486] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:35.702 [2024-11-28 09:52:14.423492] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:35.702 [2024-11-28 09:52:14.423497] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:35.702 [2024-11-28 09:52:14.423503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.702 [2024-11-28 09:52:14.423509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:35.702 [2024-11-28 09:52:14.423516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:19:35.702 [2024-11-28 09:52:14.423521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.702 [2024-11-28 09:52:14.423600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.702 [2024-11-28 09:52:14.423609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:35.702 [2024-11-28 09:52:14.423616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:35.702 [2024-11-28 09:52:14.423621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.702 [2024-11-28 09:52:14.423702] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:35.702 [2024-11-28 09:52:14.423711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:35.702 [2024-11-28 09:52:14.423719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.702 [2024-11-28 09:52:14.423725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.702 [2024-11-28 09:52:14.423731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:35.702 [2024-11-28 09:52:14.423737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:35.702 [2024-11-28 09:52:14.423743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:35.702 [2024-11-28 09:52:14.423750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:35.702 [2024-11-28 09:52:14.423757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:35.702 [2024-11-28 09:52:14.423763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.702 [2024-11-28 09:52:14.423768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:35.702 [2024-11-28 09:52:14.423779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:35.702 [2024-11-28 09:52:14.423786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.702 [2024-11-28 09:52:14.423792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:35.702 [2024-11-28 09:52:14.423797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:35.703 [2024-11-28 09:52:14.423802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.703 [2024-11-28 09:52:14.423807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:35.703 [2024-11-28 09:52:14.423813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:35.703 [2024-11-28 09:52:14.423818] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.703 [2024-11-28 09:52:14.423823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:35.703 [2024-11-28 09:52:14.423829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:35.703 [2024-11-28 09:52:14.423834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.703 [2024-11-28 09:52:14.423839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:35.703 [2024-11-28 09:52:14.423844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:35.703 [2024-11-28 09:52:14.423849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.703 [2024-11-28 09:52:14.423854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:35.703 [2024-11-28 09:52:14.423859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:35.703 [2024-11-28 09:52:14.423863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.703 [2024-11-28 09:52:14.423868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:35.703 [2024-11-28 09:52:14.423873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:35.703 [2024-11-28 09:52:14.423878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.703 [2024-11-28 09:52:14.423883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:35.703 [2024-11-28 09:52:14.423888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:35.703 [2024-11-28 09:52:14.423893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.703 [2024-11-28 09:52:14.423898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:35.703 [2024-11-28 09:52:14.423903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:35.703 [2024-11-28 09:52:14.423908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.703 [2024-11-28 09:52:14.423914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:35.703 [2024-11-28 09:52:14.423919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:35.703 [2024-11-28 09:52:14.423924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.703 [2024-11-28 09:52:14.423929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:35.703 [2024-11-28 09:52:14.423934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:35.703 [2024-11-28 09:52:14.423939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.703 [2024-11-28 09:52:14.423945] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:35.703 [2024-11-28 09:52:14.423953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:35.703 [2024-11-28 09:52:14.423961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.703 [2024-11-28 09:52:14.423966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.703 [2024-11-28 09:52:14.423972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:35.703 [2024-11-28 09:52:14.423977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:35.703 [2024-11-28 09:52:14.423982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:35.703 
[2024-11-28 09:52:14.423987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:35.703 [2024-11-28 09:52:14.423993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:35.703 [2024-11-28 09:52:14.423998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:35.703 [2024-11-28 09:52:14.424004] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:35.703 [2024-11-28 09:52:14.424011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.703 [2024-11-28 09:52:14.424017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:35.703 [2024-11-28 09:52:14.424023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:35.703 [2024-11-28 09:52:14.424029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:35.703 [2024-11-28 09:52:14.424034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:35.703 [2024-11-28 09:52:14.424040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:35.703 [2024-11-28 09:52:14.424045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:35.703 [2024-11-28 09:52:14.424051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:35.703 [2024-11-28 09:52:14.424056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:35.703 [2024-11-28 09:52:14.424061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:35.703 [2024-11-28 09:52:14.424067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:35.703 [2024-11-28 09:52:14.424074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:35.703 [2024-11-28 09:52:14.424079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:35.703 [2024-11-28 09:52:14.424084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:35.703 [2024-11-28 09:52:14.424090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:35.703 [2024-11-28 09:52:14.424096] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:35.703 [2024-11-28 09:52:14.424102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.703 [2024-11-28 09:52:14.424110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:35.703 [2024-11-28 09:52:14.424116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:35.703 [2024-11-28 09:52:14.424122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:35.703 [2024-11-28 09:52:14.424127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:35.703 [2024-11-28 09:52:14.424133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.703 [2024-11-28 09:52:14.424141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:35.703 [2024-11-28 09:52:14.424147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:19:35.703 [2024-11-28 09:52:14.424164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-11-28 09:52:14.448461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.703 [2024-11-28 09:52:14.448490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.703 [2024-11-28 09:52:14.448499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.243 ms 00:19:35.703 [2024-11-28 09:52:14.448506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-11-28 09:52:14.448607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.703 [2024-11-28 09:52:14.448616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:35.703 [2024-11-28 09:52:14.448623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:35.703 [2024-11-28 09:52:14.448629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-11-28 09:52:14.489580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.703 [2024-11-28 09:52:14.489611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.703 [2024-11-28 09:52:14.489624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.933 ms 00:19:35.703 [2024-11-28 09:52:14.489631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-11-28 09:52:14.489690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.703 [2024-11-28 09:52:14.489699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.703 [2024-11-28 09:52:14.489708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:35.703 [2024-11-28 09:52:14.489714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-11-28 09:52:14.490299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.703 [2024-11-28 09:52:14.490318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.703 [2024-11-28 09:52:14.490327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:19:35.703 [2024-11-28 09:52:14.490336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-11-28 09:52:14.490474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.703 [2024-11-28 09:52:14.490488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.703 [2024-11-28 09:52:14.490496] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:35.703 [2024-11-28 09:52:14.490502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-11-28 09:52:14.502684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.703 [2024-11-28 09:52:14.502706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.703 [2024-11-28 09:52:14.502713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.163 ms 00:19:35.703 [2024-11-28 09:52:14.502719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-11-28 09:52:14.513133] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:35.703 [2024-11-28 09:52:14.513188] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:35.703 [2024-11-28 09:52:14.513200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.703 [2024-11-28 09:52:14.513207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:35.703 [2024-11-28 09:52:14.513214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.403 ms 00:19:35.703 [2024-11-28 09:52:14.513219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-11-28 09:52:14.531782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.703 [2024-11-28 09:52:14.531911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:35.703 [2024-11-28 09:52:14.531925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.505 ms 00:19:35.703 [2024-11-28 09:52:14.531932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.704 [2024-11-28 09:52:14.541334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.704 [2024-11-28 09:52:14.541360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:35.704 [2024-11-28 09:52:14.541368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.348 ms 00:19:35.704 [2024-11-28 09:52:14.541374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.704 [2024-11-28 09:52:14.550370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.704 [2024-11-28 09:52:14.550465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:35.704 [2024-11-28 09:52:14.550477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.954 ms 00:19:35.704 [2024-11-28 09:52:14.550483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.704 [2024-11-28 09:52:14.550951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.704 [2024-11-28 09:52:14.550963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:35.704 [2024-11-28 09:52:14.550970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:19:35.704 [2024-11-28 09:52:14.550976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.966 [2024-11-28 09:52:14.599484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.966 [2024-11-28 09:52:14.599520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:35.966 [2024-11-28 09:52:14.599531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.490 ms 00:19:35.966 [2024-11-28 09:52:14.599538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.966 [2024-11-28 09:52:14.607963] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:35.966 [2024-11-28 09:52:14.622530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.966 [2024-11-28 09:52:14.622564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:35.966 [2024-11-28 09:52:14.622576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.916 ms 00:19:35.966 [2024-11-28 09:52:14.622582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.966 [2024-11-28 09:52:14.622662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.966 [2024-11-28 09:52:14.622671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:35.966 [2024-11-28 09:52:14.622678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:35.966 [2024-11-28 09:52:14.622684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.966 [2024-11-28 09:52:14.622731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.966 [2024-11-28 09:52:14.622738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:35.966 [2024-11-28 09:52:14.622746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:35.966 [2024-11-28 09:52:14.622752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.966 [2024-11-28 09:52:14.622779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.966 [2024-11-28 09:52:14.622788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:35.966 [2024-11-28 09:52:14.622795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:35.966 [2024-11-28 09:52:14.622801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.966 [2024-11-28 09:52:14.622830] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:35.966 [2024-11-28 09:52:14.622838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.966 [2024-11-28 09:52:14.622844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:35.966 [2024-11-28 09:52:14.622850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:35.966 [2024-11-28 09:52:14.622856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.966 [2024-11-28 09:52:14.641646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.966 [2024-11-28 09:52:14.641778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:35.966 [2024-11-28 09:52:14.641793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.771 ms 00:19:35.966 [2024-11-28 09:52:14.641800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.966 [2024-11-28 09:52:14.641876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.966 [2024-11-28 09:52:14.641885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:35.966 [2024-11-28 09:52:14.641892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:35.966 [2024-11-28 09:52:14.641898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
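The layout dump printed during this startup reports 23592960 L2P entries with a 4-byte address size, which accounts exactly for the 90.00 MiB l2p region and matches the num_blocks value jq reported for ftl0 before the shutdown. A quick consistency check of that arithmetic, with values copied from the trace; the 4 KiB user block size is an assumption, since it is not printed in this excerpt.

```bash
# Consistency check of the ftl_layout numbers shown above (values copied from the trace).
entries=23592960     # "L2P entries"
addr_size=4          # "L2P address size" in bytes
block_size=4096      # assumed FTL user block size (4 KiB); not printed in this excerpt

# L2P table footprint: 23592960 * 4 B = 94371840 B = 90 MiB -> "Region l2p ... blocks: 90.00 MiB"
echo "l2p MiB:  $(( entries * addr_size / 1024 / 1024 ))"

# Addressable user space: 23592960 blocks * 4 KiB = 90 GiB,
# the same block count the test read back through jq before unloading ftl0.
echo "user GiB: $(( entries * block_size / 1024 / 1024 / 1024 ))"
```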
00:19:35.966 [2024-11-28 09:52:14.642669] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:35.966 [2024-11-28 09:52:14.644934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 243.869 ms, result 0 00:19:35.966 [2024-11-28 09:52:14.645800] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:35.966 [2024-11-28 09:52:14.660478] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:36.906  [2024-11-28T09:52:16.728Z] Copying: 23/256 [MB] (23 MBps) [2024-11-28T09:52:17.669Z] Copying: 45/256 [MB] (21 MBps) [2024-11-28T09:52:19.054Z] Copying: 66/256 [MB] (21 MBps) [2024-11-28T09:52:19.997Z] Copying: 86/256 [MB] (20 MBps) [2024-11-28T09:52:20.937Z] Copying: 113/256 [MB] (26 MBps) [2024-11-28T09:52:21.879Z] Copying: 133/256 [MB] (19 MBps) [2024-11-28T09:52:22.822Z] Copying: 152/256 [MB] (19 MBps) [2024-11-28T09:52:23.766Z] Copying: 167/256 [MB] (15 MBps) [2024-11-28T09:52:24.710Z] Copying: 186/256 [MB] (18 MBps) [2024-11-28T09:52:26.098Z] Copying: 205/256 [MB] (19 MBps) [2024-11-28T09:52:26.671Z] Copying: 221/256 [MB] (15 MBps) [2024-11-28T09:52:28.059Z] Copying: 231/256 [MB] (10 MBps) [2024-11-28T09:52:29.006Z] Copying: 242/256 [MB] (11 MBps) [2024-11-28T09:52:29.006Z] Copying: 254/256 [MB] (11 MBps) [2024-11-28T09:52:29.006Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-28 09:52:28.838454] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.126 [2024-11-28 09:52:28.845783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.126 [2024-11-28 09:52:28.845815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:50.126 [2024-11-28 09:52:28.845828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:50.126 [2024-11-28 09:52:28.845839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.126 [2024-11-28 09:52:28.845856] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:50.126 [2024-11-28 09:52:28.848073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.126 [2024-11-28 09:52:28.848097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:50.126 [2024-11-28 09:52:28.848106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.206 ms 00:19:50.126 [2024-11-28 09:52:28.848113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.126 [2024-11-28 09:52:28.850712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.126 [2024-11-28 09:52:28.850739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:50.126 [2024-11-28 09:52:28.850747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.573 ms 00:19:50.126 [2024-11-28 09:52:28.850753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.126 [2024-11-28 09:52:28.857463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.126 [2024-11-28 09:52:28.857492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:50.126 [2024-11-28 09:52:28.857500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.697 ms 00:19:50.126 [2024-11-28 09:52:28.857506] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.126 [2024-11-28 09:52:28.862757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.126 [2024-11-28 09:52:28.862780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:50.126 [2024-11-28 09:52:28.862788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.226 ms 00:19:50.126 [2024-11-28 09:52:28.862795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.126 [2024-11-28 09:52:28.880938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.126 [2024-11-28 09:52:28.880963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:50.126 [2024-11-28 09:52:28.880972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.101 ms 00:19:50.126 [2024-11-28 09:52:28.880979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.126 [2024-11-28 09:52:28.892791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.126 [2024-11-28 09:52:28.892821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:50.126 [2024-11-28 09:52:28.892832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.785 ms 00:19:50.126 [2024-11-28 09:52:28.892839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.126 [2024-11-28 09:52:28.892934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.126 [2024-11-28 09:52:28.892942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:50.126 [2024-11-28 09:52:28.892949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:50.126 [2024-11-28 09:52:28.892962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.126 [2024-11-28 09:52:28.911282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.126 [2024-11-28 09:52:28.911306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:50.126 [2024-11-28 09:52:28.911313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.308 ms 00:19:50.126 [2024-11-28 09:52:28.911319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.126 [2024-11-28 09:52:28.929139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.126 [2024-11-28 09:52:28.929176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:50.126 [2024-11-28 09:52:28.929184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.793 ms 00:19:50.126 [2024-11-28 09:52:28.929190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.126 [2024-11-28 09:52:28.946758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.126 [2024-11-28 09:52:28.946892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:50.126 [2024-11-28 09:52:28.946905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.529 ms 00:19:50.126 [2024-11-28 09:52:28.946912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.126 [2024-11-28 09:52:28.964354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.126 [2024-11-28 09:52:28.964379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:50.126 [2024-11-28 09:52:28.964386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 17.395 ms 00:19:50.126 [2024-11-28 09:52:28.964392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.126 [2024-11-28 09:52:28.964418] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:50.126 [2024-11-28 09:52:28.964431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:50.126 [2024-11-28 09:52:28.964439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:50.126 [2024-11-28 09:52:28.964445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:50.126 [2024-11-28 09:52:28.964451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:50.126 [2024-11-28 09:52:28.964457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:50.126 [2024-11-28 09:52:28.964463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:50.126 [2024-11-28 09:52:28.964469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:50.126 [2024-11-28 09:52:28.964475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:50.126 [2024-11-28 09:52:28.964481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:50.126 [2024-11-28 09:52:28.964487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 
[2024-11-28 09:52:28.964567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:19:50.127 [2024-11-28 09:52:28.964713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.964996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.965001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.965007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:50.127 [2024-11-28 09:52:28.965013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:50.128 [2024-11-28 09:52:28.965025] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:50.128 [2024-11-28 09:52:28.965031] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1a94f5e4-bdc1-43c4-93bd-1d0e31f9ca15 00:19:50.128 [2024-11-28 09:52:28.965038] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:50.128 [2024-11-28 09:52:28.965044] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:50.128 [2024-11-28 09:52:28.965050] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:50.128 [2024-11-28 09:52:28.965056] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:50.128 [2024-11-28 09:52:28.965061] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:50.128 [2024-11-28 09:52:28.965067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:50.128 [2024-11-28 09:52:28.965073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:50.128 [2024-11-28 09:52:28.965078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:50.128 [2024-11-28 09:52:28.965083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:50.128 [2024-11-28 09:52:28.965089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.128 [2024-11-28 09:52:28.965097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:50.128 [2024-11-28 09:52:28.965103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:19:50.128 [2024-11-28 09:52:28.965109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.128 [2024-11-28 09:52:28.975157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.128 [2024-11-28 09:52:28.975180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:50.128 [2024-11-28 09:52:28.975188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.029 ms 00:19:50.128 [2024-11-28 09:52:28.975194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.128 [2024-11-28 09:52:28.975486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.128 [2024-11-28 09:52:28.975494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:50.128 [2024-11-28 09:52:28.975500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:19:50.128 [2024-11-28 09:52:28.975506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.389 [2024-11-28 09:52:29.004853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.389 [2024-11-28 09:52:29.004880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.389 [2024-11-28 09:52:29.004887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.389 [2024-11-28 09:52:29.004894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.389 [2024-11-28 09:52:29.004970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.389 [2024-11-28 
09:52:29.004978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.389 [2024-11-28 09:52:29.004985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.389 [2024-11-28 09:52:29.004991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.389 [2024-11-28 09:52:29.005025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.389 [2024-11-28 09:52:29.005032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.389 [2024-11-28 09:52:29.005038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.389 [2024-11-28 09:52:29.005044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.389 [2024-11-28 09:52:29.005057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.389 [2024-11-28 09:52:29.005066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.389 [2024-11-28 09:52:29.005073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.389 [2024-11-28 09:52:29.005078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.389 [2024-11-28 09:52:29.068148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.389 [2024-11-28 09:52:29.068184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.389 [2024-11-28 09:52:29.068192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.389 [2024-11-28 09:52:29.068199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.389 [2024-11-28 09:52:29.118965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.389 [2024-11-28 09:52:29.118997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.389 [2024-11-28 09:52:29.119007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.389 [2024-11-28 09:52:29.119013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.389 [2024-11-28 09:52:29.119074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.389 [2024-11-28 09:52:29.119082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.389 [2024-11-28 09:52:29.119089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.389 [2024-11-28 09:52:29.119095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.389 [2024-11-28 09:52:29.119121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.389 [2024-11-28 09:52:29.119128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.389 [2024-11-28 09:52:29.119138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.389 [2024-11-28 09:52:29.119145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.389 [2024-11-28 09:52:29.119238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.389 [2024-11-28 09:52:29.119247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.389 [2024-11-28 09:52:29.119254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.389 [2024-11-28 09:52:29.119260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.389 [2024-11-28 09:52:29.119286] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.389 [2024-11-28 09:52:29.119294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:50.389 [2024-11-28 09:52:29.119301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.389 [2024-11-28 09:52:29.119310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.389 [2024-11-28 09:52:29.119345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.389 [2024-11-28 09:52:29.119352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.389 [2024-11-28 09:52:29.119358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.389 [2024-11-28 09:52:29.119365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.389 [2024-11-28 09:52:29.119407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.389 [2024-11-28 09:52:29.119415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.389 [2024-11-28 09:52:29.119423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.389 [2024-11-28 09:52:29.119429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.389 [2024-11-28 09:52:29.119555] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 273.749 ms, result 0 00:19:50.963 00:19:50.963 00:19:50.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.963 09:52:29 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76631 00:19:50.963 09:52:29 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76631 00:19:50.963 09:52:29 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76631 ']' 00:19:50.963 09:52:29 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:50.963 09:52:29 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.963 09:52:29 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.963 09:52:29 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.963 09:52:29 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.963 09:52:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:51.225 [2024-11-28 09:52:29.897065] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:19:51.225 [2024-11-28 09:52:29.897200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76631 ] 00:19:51.225 [2024-11-28 09:52:30.052972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.486 [2024-11-28 09:52:30.172578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.061 09:52:30 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.061 09:52:30 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:52.061 09:52:30 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:52.324 [2024-11-28 09:52:31.100322] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.324 [2024-11-28 09:52:31.100399] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.586 [2024-11-28 09:52:31.265808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.586 [2024-11-28 09:52:31.265879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:52.586 [2024-11-28 09:52:31.265900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:52.586 [2024-11-28 09:52:31.265910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.586 [2024-11-28 09:52:31.269097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.586 [2024-11-28 09:52:31.269171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:52.586 [2024-11-28 09:52:31.269186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.163 ms 00:19:52.586 [2024-11-28 09:52:31.269196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.586 [2024-11-28 09:52:31.269345] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:52.586 [2024-11-28 09:52:31.270095] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:52.586 [2024-11-28 09:52:31.270135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.586 [2024-11-28 09:52:31.270146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:52.586 [2024-11-28 09:52:31.270174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:19:52.586 [2024-11-28 09:52:31.270187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.586 [2024-11-28 09:52:31.272596] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:52.586 [2024-11-28 09:52:31.288122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.586 [2024-11-28 09:52:31.288196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:52.586 [2024-11-28 09:52:31.288213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.534 ms 00:19:52.586 [2024-11-28 09:52:31.288224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.586 [2024-11-28 09:52:31.288344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.586 [2024-11-28 09:52:31.288362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:52.586 [2024-11-28 09:52:31.288371] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:52.586 [2024-11-28 09:52:31.288385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.586 [2024-11-28 09:52:31.299943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.586 [2024-11-28 09:52:31.300000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.586 [2024-11-28 09:52:31.300012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.503 ms 00:19:52.586 [2024-11-28 09:52:31.300023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.586 [2024-11-28 09:52:31.300195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.586 [2024-11-28 09:52:31.300210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.586 [2024-11-28 09:52:31.300224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:19:52.586 [2024-11-28 09:52:31.300237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.586 [2024-11-28 09:52:31.300266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.586 [2024-11-28 09:52:31.300277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:52.586 [2024-11-28 09:52:31.300286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:52.586 [2024-11-28 09:52:31.300297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.586 [2024-11-28 09:52:31.300323] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:52.586 [2024-11-28 09:52:31.304990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.586 [2024-11-28 09:52:31.305034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.587 [2024-11-28 09:52:31.305049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.671 ms 00:19:52.587 [2024-11-28 09:52:31.305057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.587 [2024-11-28 09:52:31.305124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.587 [2024-11-28 09:52:31.305133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:52.587 [2024-11-28 09:52:31.305149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:52.587 [2024-11-28 09:52:31.305180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.587 [2024-11-28 09:52:31.305206] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:52.587 [2024-11-28 09:52:31.305247] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:52.587 [2024-11-28 09:52:31.305298] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:52.587 [2024-11-28 09:52:31.305315] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:52.587 [2024-11-28 09:52:31.305430] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:52.587 [2024-11-28 09:52:31.305450] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:52.587 [2024-11-28 09:52:31.305464] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:52.587 [2024-11-28 09:52:31.305476] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:52.587 [2024-11-28 09:52:31.305489] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:52.587 [2024-11-28 09:52:31.305498] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:52.587 [2024-11-28 09:52:31.305510] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:52.587 [2024-11-28 09:52:31.305517] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:52.587 [2024-11-28 09:52:31.305531] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:52.587 [2024-11-28 09:52:31.305540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.587 [2024-11-28 09:52:31.305552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:52.587 [2024-11-28 09:52:31.305563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:19:52.587 [2024-11-28 09:52:31.305576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.587 [2024-11-28 09:52:31.305667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.587 [2024-11-28 09:52:31.305680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:52.587 [2024-11-28 09:52:31.305688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:52.587 [2024-11-28 09:52:31.305698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.587 [2024-11-28 09:52:31.305800] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:52.587 [2024-11-28 09:52:31.305816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:52.587 [2024-11-28 09:52:31.305825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.587 [2024-11-28 09:52:31.305835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.587 [2024-11-28 09:52:31.305847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:52.587 [2024-11-28 09:52:31.305857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:52.587 [2024-11-28 09:52:31.305863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:52.587 [2024-11-28 09:52:31.305879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:52.587 [2024-11-28 09:52:31.305886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:52.587 [2024-11-28 09:52:31.305897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.587 [2024-11-28 09:52:31.305905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:52.587 [2024-11-28 09:52:31.305916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:52.587 [2024-11-28 09:52:31.305922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.587 [2024-11-28 09:52:31.305932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:52.587 [2024-11-28 09:52:31.305942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:52.587 [2024-11-28 09:52:31.305952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.587 
[2024-11-28 09:52:31.305958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:52.587 [2024-11-28 09:52:31.305969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:52.587 [2024-11-28 09:52:31.305984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.587 [2024-11-28 09:52:31.305993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:52.587 [2024-11-28 09:52:31.306000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:52.587 [2024-11-28 09:52:31.306009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.587 [2024-11-28 09:52:31.306016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:52.587 [2024-11-28 09:52:31.306027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:52.587 [2024-11-28 09:52:31.306035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.587 [2024-11-28 09:52:31.306044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:52.587 [2024-11-28 09:52:31.306051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:52.587 [2024-11-28 09:52:31.306060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.587 [2024-11-28 09:52:31.306067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:52.587 [2024-11-28 09:52:31.306075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:52.587 [2024-11-28 09:52:31.306083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.587 [2024-11-28 09:52:31.306095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:52.587 [2024-11-28 09:52:31.306101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:52.587 [2024-11-28 09:52:31.306110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.587 [2024-11-28 09:52:31.306117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:52.587 [2024-11-28 09:52:31.306126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:52.587 [2024-11-28 09:52:31.306132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.587 [2024-11-28 09:52:31.306140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:52.587 [2024-11-28 09:52:31.306147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:52.587 [2024-11-28 09:52:31.306521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.587 [2024-11-28 09:52:31.306568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:52.587 [2024-11-28 09:52:31.306594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:52.587 [2024-11-28 09:52:31.306614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.587 [2024-11-28 09:52:31.306635] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:52.587 [2024-11-28 09:52:31.306658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:52.587 [2024-11-28 09:52:31.306680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.587 [2024-11-28 09:52:31.306703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.587 [2024-11-28 09:52:31.306792] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:52.587 [2024-11-28 09:52:31.306818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:52.587 [2024-11-28 09:52:31.306840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:52.587 [2024-11-28 09:52:31.306860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:52.587 [2024-11-28 09:52:31.306882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:52.587 [2024-11-28 09:52:31.306901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:52.587 [2024-11-28 09:52:31.306925] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:52.587 [2024-11-28 09:52:31.306960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.587 [2024-11-28 09:52:31.307034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:52.587 [2024-11-28 09:52:31.307067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:52.587 [2024-11-28 09:52:31.307103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:52.587 [2024-11-28 09:52:31.307133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:52.587 [2024-11-28 09:52:31.307183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:52.587 [2024-11-28 09:52:31.307212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:52.587 [2024-11-28 09:52:31.307245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:52.587 [2024-11-28 09:52:31.307273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:52.587 [2024-11-28 09:52:31.307305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:52.587 [2024-11-28 09:52:31.307335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:52.587 [2024-11-28 09:52:31.307438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:52.587 [2024-11-28 09:52:31.307472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:52.587 [2024-11-28 09:52:31.307503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:52.587 [2024-11-28 09:52:31.307533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:52.587 [2024-11-28 09:52:31.307564] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:52.587 [2024-11-28 
09:52:31.307594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.587 [2024-11-28 09:52:31.307628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:52.587 [2024-11-28 09:52:31.307658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:52.588 [2024-11-28 09:52:31.307689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:52.588 [2024-11-28 09:52:31.307719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:52.588 [2024-11-28 09:52:31.307752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.588 [2024-11-28 09:52:31.307813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:52.588 [2024-11-28 09:52:31.307847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.018 ms 00:19:52.588 [2024-11-28 09:52:31.307869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.588 [2024-11-28 09:52:31.346309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.588 [2024-11-28 09:52:31.346528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:52.588 [2024-11-28 09:52:31.346554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.347 ms 00:19:52.588 [2024-11-28 09:52:31.346564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.588 [2024-11-28 09:52:31.346713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.588 [2024-11-28 09:52:31.346725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:52.588 [2024-11-28 09:52:31.346738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:52.588 [2024-11-28 09:52:31.346745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.588 [2024-11-28 09:52:31.386460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.588 [2024-11-28 09:52:31.386510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:52.588 [2024-11-28 09:52:31.386524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.688 ms 00:19:52.588 [2024-11-28 09:52:31.386534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.588 [2024-11-28 09:52:31.386637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.588 [2024-11-28 09:52:31.386647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.588 [2024-11-28 09:52:31.386660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:52.588 [2024-11-28 09:52:31.386668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.588 [2024-11-28 09:52:31.387394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.588 [2024-11-28 09:52:31.387437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.588 [2024-11-28 09:52:31.387452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:19:52.588 [2024-11-28 09:52:31.387461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:52.588 [2024-11-28 09:52:31.387641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.588 [2024-11-28 09:52:31.387653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.588 [2024-11-28 09:52:31.387665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:19:52.588 [2024-11-28 09:52:31.387674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.588 [2024-11-28 09:52:31.408728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.588 [2024-11-28 09:52:31.408772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.588 [2024-11-28 09:52:31.408786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.024 ms 00:19:52.588 [2024-11-28 09:52:31.408795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.588 [2024-11-28 09:52:31.440905] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:52.588 [2024-11-28 09:52:31.440965] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:52.588 [2024-11-28 09:52:31.440988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.588 [2024-11-28 09:52:31.440999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:52.588 [2024-11-28 09:52:31.441013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.065 ms 00:19:52.588 [2024-11-28 09:52:31.441030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.849 [2024-11-28 09:52:31.467595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.849 [2024-11-28 09:52:31.467646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:52.849 [2024-11-28 09:52:31.467664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.455 ms 00:19:52.849 [2024-11-28 09:52:31.467676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.849 [2024-11-28 09:52:31.480707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.849 [2024-11-28 09:52:31.480753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:52.849 [2024-11-28 09:52:31.480771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.929 ms 00:19:52.850 [2024-11-28 09:52:31.480780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.850 [2024-11-28 09:52:31.493504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.850 [2024-11-28 09:52:31.493551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:52.850 [2024-11-28 09:52:31.493566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.627 ms 00:19:52.850 [2024-11-28 09:52:31.493574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.850 [2024-11-28 09:52:31.494284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.850 [2024-11-28 09:52:31.494314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:52.850 [2024-11-28 09:52:31.494329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:19:52.850 [2024-11-28 09:52:31.494339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.850 [2024-11-28 
09:52:31.568864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.850 [2024-11-28 09:52:31.569122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:52.850 [2024-11-28 09:52:31.569171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.492 ms 00:19:52.850 [2024-11-28 09:52:31.569183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.850 [2024-11-28 09:52:31.582267] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:52.850 [2024-11-28 09:52:31.606405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.850 [2024-11-28 09:52:31.606469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:52.850 [2024-11-28 09:52:31.606482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.991 ms 00:19:52.850 [2024-11-28 09:52:31.606493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.850 [2024-11-28 09:52:31.606628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.850 [2024-11-28 09:52:31.606642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:52.850 [2024-11-28 09:52:31.606654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:52.850 [2024-11-28 09:52:31.606665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.850 [2024-11-28 09:52:31.606737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.850 [2024-11-28 09:52:31.606750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:52.850 [2024-11-28 09:52:31.606761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:52.850 [2024-11-28 09:52:31.606772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.850 [2024-11-28 09:52:31.606801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.850 [2024-11-28 09:52:31.606816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:52.850 [2024-11-28 09:52:31.606824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:52.850 [2024-11-28 09:52:31.606837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.850 [2024-11-28 09:52:31.606879] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:52.850 [2024-11-28 09:52:31.606899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.850 [2024-11-28 09:52:31.606908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:52.850 [2024-11-28 09:52:31.606919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:52.850 [2024-11-28 09:52:31.606930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.850 [2024-11-28 09:52:31.634386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.850 [2024-11-28 09:52:31.634627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:52.850 [2024-11-28 09:52:31.634655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.423 ms 00:19:52.850 [2024-11-28 09:52:31.634666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.850 [2024-11-28 09:52:31.634802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.850 [2024-11-28 09:52:31.634817] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:52.850 [2024-11-28 09:52:31.634831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:52.850 [2024-11-28 09:52:31.634840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.850 [2024-11-28 09:52:31.636131] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:52.850 [2024-11-28 09:52:31.639639] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 369.935 ms, result 0 00:19:52.850 [2024-11-28 09:52:31.642410] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:52.850 Some configs were skipped because the RPC state that can call them passed over. 00:19:52.850 09:52:31 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:53.112 [2024-11-28 09:52:31.882758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.112 [2024-11-28 09:52:31.882829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:53.112 [2024-11-28 09:52:31.882843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.888 ms 00:19:53.112 [2024-11-28 09:52:31.882855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.112 [2024-11-28 09:52:31.882892] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.028 ms, result 0 00:19:53.112 true 00:19:53.112 09:52:31 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:53.373 [2024-11-28 09:52:32.094604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.373 [2024-11-28 09:52:32.094657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:53.373 [2024-11-28 09:52:32.094669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.567 ms 00:19:53.373 [2024-11-28 09:52:32.094675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.373 [2024-11-28 09:52:32.094707] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.671 ms, result 0 00:19:53.373 true 00:19:53.373 09:52:32 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76631 00:19:53.373 09:52:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76631 ']' 00:19:53.373 09:52:32 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76631 00:19:53.373 09:52:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:53.373 09:52:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.373 09:52:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76631 00:19:53.373 killing process with pid 76631 00:19:53.373 09:52:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.373 09:52:32 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.373 09:52:32 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76631' 00:19:53.373 09:52:32 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76631 00:19:53.373 09:52:32 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76631 00:19:53.946 [2024-11-28 09:52:32.718506] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.946 [2024-11-28 09:52:32.718559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:53.946 [2024-11-28 09:52:32.718571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:53.946 [2024-11-28 09:52:32.718581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.946 [2024-11-28 09:52:32.718599] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:53.946 [2024-11-28 09:52:32.720747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.946 [2024-11-28 09:52:32.720773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:53.946 [2024-11-28 09:52:32.720785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.132 ms 00:19:53.946 [2024-11-28 09:52:32.720792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.946 [2024-11-28 09:52:32.721026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.946 [2024-11-28 09:52:32.721035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:53.946 [2024-11-28 09:52:32.721043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:19:53.946 [2024-11-28 09:52:32.721049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.946 [2024-11-28 09:52:32.724635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.946 [2024-11-28 09:52:32.724802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:53.946 [2024-11-28 09:52:32.724819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.568 ms 00:19:53.946 [2024-11-28 09:52:32.724826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.946 [2024-11-28 09:52:32.730139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.946 [2024-11-28 09:52:32.730170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:53.946 [2024-11-28 09:52:32.730181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.277 ms 00:19:53.946 [2024-11-28 09:52:32.730188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.946 [2024-11-28 09:52:32.738592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.946 [2024-11-28 09:52:32.738623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:53.946 [2024-11-28 09:52:32.738634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.347 ms 00:19:53.946 [2024-11-28 09:52:32.738640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.946 [2024-11-28 09:52:32.746136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.946 [2024-11-28 09:52:32.746174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:53.946 [2024-11-28 09:52:32.746185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.463 ms 00:19:53.946 [2024-11-28 09:52:32.746192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.946 [2024-11-28 09:52:32.746310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.946 [2024-11-28 09:52:32.746319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:53.946 [2024-11-28 09:52:32.746327] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:53.946 [2024-11-28 09:52:32.746334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.946 [2024-11-28 09:52:32.754924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.946 [2024-11-28 09:52:32.754949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:53.946 [2024-11-28 09:52:32.754958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.574 ms 00:19:53.946 [2024-11-28 09:52:32.754964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.946 [2024-11-28 09:52:32.762931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.946 [2024-11-28 09:52:32.763039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:53.946 [2024-11-28 09:52:32.763055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.936 ms 00:19:53.946 [2024-11-28 09:52:32.763061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.946 [2024-11-28 09:52:32.770606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.946 [2024-11-28 09:52:32.770629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:53.946 [2024-11-28 09:52:32.770638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.515 ms 00:19:53.946 [2024-11-28 09:52:32.770644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.946 [2024-11-28 09:52:32.778022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.946 [2024-11-28 09:52:32.778114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:53.946 [2024-11-28 09:52:32.778128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.325 ms 00:19:53.946 [2024-11-28 09:52:32.778134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.946 [2024-11-28 09:52:32.778182] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:53.946 [2024-11-28 09:52:32.778196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778265] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:53.946 [2024-11-28 09:52:32.778387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 
[2024-11-28 09:52:32.778434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:19:53.947 [2024-11-28 09:52:32.778600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:53.947 [2024-11-28 09:52:32.778878] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:53.947 [2024-11-28 09:52:32.778890] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1a94f5e4-bdc1-43c4-93bd-1d0e31f9ca15 00:19:53.947 [2024-11-28 09:52:32.778896] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:53.947 [2024-11-28 09:52:32.778904] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:53.947 [2024-11-28 09:52:32.778910] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:53.947 [2024-11-28 09:52:32.778918] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:53.947 [2024-11-28 09:52:32.778924] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:53.947 [2024-11-28 09:52:32.778931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:53.947 [2024-11-28 09:52:32.778937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:53.947 [2024-11-28 09:52:32.778944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:53.947 [2024-11-28 09:52:32.778948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:53.947 [2024-11-28 09:52:32.778957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:53.947 [2024-11-28 09:52:32.778963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:53.947 [2024-11-28 09:52:32.778973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:19:53.947 [2024-11-28 09:52:32.778978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.947 [2024-11-28 09:52:32.789123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.947 [2024-11-28 09:52:32.789247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:53.947 [2024-11-28 09:52:32.789265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.128 ms 00:19:53.947 [2024-11-28 09:52:32.789271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.947 [2024-11-28 09:52:32.789582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.947 [2024-11-28 09:52:32.789594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:53.948 [2024-11-28 09:52:32.789603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:19:53.948 [2024-11-28 09:52:32.789609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.207 [2024-11-28 09:52:32.826422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.208 [2024-11-28 09:52:32.826448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.208 [2024-11-28 09:52:32.826458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.208 [2024-11-28 09:52:32.826466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.208 [2024-11-28 09:52:32.826549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.208 [2024-11-28 09:52:32.826559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.208 [2024-11-28 09:52:32.826567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.208 [2024-11-28 09:52:32.826573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.208 [2024-11-28 09:52:32.826612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.208 [2024-11-28 09:52:32.826620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.208 [2024-11-28 09:52:32.826631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.208 [2024-11-28 09:52:32.826637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.208 [2024-11-28 09:52:32.826653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.208 [2024-11-28 09:52:32.826659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:54.208 [2024-11-28 09:52:32.826669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.208 [2024-11-28 09:52:32.826674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.208 [2024-11-28 09:52:32.890900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.208 [2024-11-28 09:52:32.890935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.208 [2024-11-28 09:52:32.890945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.208 [2024-11-28 09:52:32.890951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.208 [2024-11-28 
09:52:32.943825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.208 [2024-11-28 09:52:32.943860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.208 [2024-11-28 09:52:32.943873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.208 [2024-11-28 09:52:32.943879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.208 [2024-11-28 09:52:32.943955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.208 [2024-11-28 09:52:32.943963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.208 [2024-11-28 09:52:32.943973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.208 [2024-11-28 09:52:32.943980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.208 [2024-11-28 09:52:32.944006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.208 [2024-11-28 09:52:32.944012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.208 [2024-11-28 09:52:32.944020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.208 [2024-11-28 09:52:32.944027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.208 [2024-11-28 09:52:32.944103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.208 [2024-11-28 09:52:32.944111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.208 [2024-11-28 09:52:32.944119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.208 [2024-11-28 09:52:32.944125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.208 [2024-11-28 09:52:32.944175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.208 [2024-11-28 09:52:32.944183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:54.208 [2024-11-28 09:52:32.944191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.208 [2024-11-28 09:52:32.944198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.208 [2024-11-28 09:52:32.944237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.208 [2024-11-28 09:52:32.944245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.208 [2024-11-28 09:52:32.944254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.208 [2024-11-28 09:52:32.944260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.208 [2024-11-28 09:52:32.944314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.208 [2024-11-28 09:52:32.944323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.208 [2024-11-28 09:52:32.944332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.208 [2024-11-28 09:52:32.944339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.208 [2024-11-28 09:52:32.944460] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 225.932 ms, result 0 00:19:54.778 09:52:33 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:54.778 09:52:33 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:54.778 [2024-11-28 09:52:33.561595] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:54.778 [2024-11-28 09:52:33.561826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76688 ] 00:19:55.039 [2024-11-28 09:52:33.718724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.039 [2024-11-28 09:52:33.805434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.300 [2024-11-28 09:52:34.038144] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.300 [2024-11-28 09:52:34.038209] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.562 [2024-11-28 09:52:34.194591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.562 [2024-11-28 09:52:34.194763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:55.562 [2024-11-28 09:52:34.194779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:55.562 [2024-11-28 09:52:34.194786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.562 [2024-11-28 09:52:34.196994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.562 [2024-11-28 09:52:34.197024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:55.562 [2024-11-28 09:52:34.197031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.190 ms 00:19:55.562 [2024-11-28 09:52:34.197038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.562 [2024-11-28 09:52:34.197101] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:55.562 [2024-11-28 09:52:34.197646] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:55.562 [2024-11-28 09:52:34.197665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.562 [2024-11-28 09:52:34.197672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:55.562 [2024-11-28 09:52:34.197680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:19:55.562 [2024-11-28 09:52:34.197686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.562 [2024-11-28 09:52:34.198962] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:55.562 [2024-11-28 09:52:34.209412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.562 [2024-11-28 09:52:34.209527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:55.562 [2024-11-28 09:52:34.209542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.451 ms 00:19:55.562 [2024-11-28 09:52:34.209549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.562 [2024-11-28 09:52:34.209620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.562 [2024-11-28 09:52:34.209629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:55.562 [2024-11-28 09:52:34.209636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.020 ms 00:19:55.562 [2024-11-28 09:52:34.209642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.562 [2024-11-28 09:52:34.216022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.562 [2024-11-28 09:52:34.216047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:55.562 [2024-11-28 09:52:34.216055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.349 ms 00:19:55.562 [2024-11-28 09:52:34.216061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.562 [2024-11-28 09:52:34.216136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.562 [2024-11-28 09:52:34.216144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:55.562 [2024-11-28 09:52:34.216163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:55.562 [2024-11-28 09:52:34.216170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.562 [2024-11-28 09:52:34.216193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.562 [2024-11-28 09:52:34.216200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:55.562 [2024-11-28 09:52:34.216207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:55.562 [2024-11-28 09:52:34.216213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.562 [2024-11-28 09:52:34.216230] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:55.562 [2024-11-28 09:52:34.219241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.562 [2024-11-28 09:52:34.219263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:55.562 [2024-11-28 09:52:34.219271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.015 ms 00:19:55.562 [2024-11-28 09:52:34.219277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.562 [2024-11-28 09:52:34.219307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.563 [2024-11-28 09:52:34.219313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:55.563 [2024-11-28 09:52:34.219320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:55.563 [2024-11-28 09:52:34.219325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.563 [2024-11-28 09:52:34.219342] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:55.563 [2024-11-28 09:52:34.219359] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:55.563 [2024-11-28 09:52:34.219387] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:55.563 [2024-11-28 09:52:34.219400] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:55.563 [2024-11-28 09:52:34.219482] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:55.563 [2024-11-28 09:52:34.219490] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:55.563 [2024-11-28 09:52:34.219499] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:55.563 [2024-11-28 09:52:34.219510] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:55.563 [2024-11-28 09:52:34.219517] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:55.563 [2024-11-28 09:52:34.219524] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:55.563 [2024-11-28 09:52:34.219531] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:55.563 [2024-11-28 09:52:34.219537] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:55.563 [2024-11-28 09:52:34.219543] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:55.563 [2024-11-28 09:52:34.219549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.563 [2024-11-28 09:52:34.219555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:55.563 [2024-11-28 09:52:34.219560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:19:55.563 [2024-11-28 09:52:34.219566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.563 [2024-11-28 09:52:34.219645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.563 [2024-11-28 09:52:34.219654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:55.563 [2024-11-28 09:52:34.219661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:55.563 [2024-11-28 09:52:34.219667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.563 [2024-11-28 09:52:34.219747] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:55.563 [2024-11-28 09:52:34.219756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:55.563 [2024-11-28 09:52:34.219763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.563 [2024-11-28 09:52:34.219769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.563 [2024-11-28 09:52:34.219776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:55.563 [2024-11-28 09:52:34.219781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:55.563 [2024-11-28 09:52:34.219786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:55.563 [2024-11-28 09:52:34.219793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:55.563 [2024-11-28 09:52:34.219800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:55.563 [2024-11-28 09:52:34.219806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.563 [2024-11-28 09:52:34.219813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:55.563 [2024-11-28 09:52:34.219824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:55.563 [2024-11-28 09:52:34.219829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.563 [2024-11-28 09:52:34.219835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:55.563 [2024-11-28 09:52:34.219841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:55.563 [2024-11-28 09:52:34.219846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.563 [2024-11-28 09:52:34.219851] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:55.563 [2024-11-28 09:52:34.219856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:55.563 [2024-11-28 09:52:34.219862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.563 [2024-11-28 09:52:34.219867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:55.563 [2024-11-28 09:52:34.219872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:55.563 [2024-11-28 09:52:34.219877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.563 [2024-11-28 09:52:34.219883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:55.563 [2024-11-28 09:52:34.219888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:55.563 [2024-11-28 09:52:34.219894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.563 [2024-11-28 09:52:34.219899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:55.563 [2024-11-28 09:52:34.219904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:55.563 [2024-11-28 09:52:34.219909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.563 [2024-11-28 09:52:34.219914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:55.563 [2024-11-28 09:52:34.219919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:55.563 [2024-11-28 09:52:34.219924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.563 [2024-11-28 09:52:34.219929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:55.563 [2024-11-28 09:52:34.219935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:55.563 [2024-11-28 09:52:34.219940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.563 [2024-11-28 09:52:34.219945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:55.563 [2024-11-28 09:52:34.219950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:55.563 [2024-11-28 09:52:34.219955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.563 [2024-11-28 09:52:34.219960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:55.563 [2024-11-28 09:52:34.219964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:55.563 [2024-11-28 09:52:34.219970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.563 [2024-11-28 09:52:34.219975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:55.563 [2024-11-28 09:52:34.219981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:55.563 [2024-11-28 09:52:34.219987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.563 [2024-11-28 09:52:34.219993] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:55.563 [2024-11-28 09:52:34.219999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:55.563 [2024-11-28 09:52:34.220007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.563 [2024-11-28 09:52:34.220014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.563 [2024-11-28 09:52:34.220020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:55.563 
[2024-11-28 09:52:34.220026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:55.563 [2024-11-28 09:52:34.220031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:55.563 [2024-11-28 09:52:34.220036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:55.563 [2024-11-28 09:52:34.220041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:55.563 [2024-11-28 09:52:34.220046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:55.563 [2024-11-28 09:52:34.220053] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:55.563 [2024-11-28 09:52:34.220061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.563 [2024-11-28 09:52:34.220067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:55.563 [2024-11-28 09:52:34.220072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:55.563 [2024-11-28 09:52:34.220078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:55.563 [2024-11-28 09:52:34.220083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:55.563 [2024-11-28 09:52:34.220089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:55.563 [2024-11-28 09:52:34.220094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:55.563 [2024-11-28 09:52:34.220099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:55.563 [2024-11-28 09:52:34.220104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:55.563 [2024-11-28 09:52:34.220109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:55.563 [2024-11-28 09:52:34.220114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:55.563 [2024-11-28 09:52:34.220120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:55.563 [2024-11-28 09:52:34.220125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:55.563 [2024-11-28 09:52:34.220130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:55.563 [2024-11-28 09:52:34.220136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:55.563 [2024-11-28 09:52:34.220141] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:55.563 [2024-11-28 09:52:34.220147] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.563 [2024-11-28 09:52:34.220356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:55.563 [2024-11-28 09:52:34.220382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:55.563 [2024-11-28 09:52:34.220405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:55.564 [2024-11-28 09:52:34.220428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:55.564 [2024-11-28 09:52:34.220451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.220470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:55.564 [2024-11-28 09:52:34.220486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:19:55.564 [2024-11-28 09:52:34.220501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.244791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.244903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:55.564 [2024-11-28 09:52:34.244947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.220 ms 00:19:55.564 [2024-11-28 09:52:34.244965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.245074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.245094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:55.564 [2024-11-28 09:52:34.245109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:55.564 [2024-11-28 09:52:34.245124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.286076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.286206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:55.564 [2024-11-28 09:52:34.286264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.870 ms 00:19:55.564 [2024-11-28 09:52:34.286285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.286371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.286395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:55.564 [2024-11-28 09:52:34.286411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:55.564 [2024-11-28 09:52:34.286426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.286880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.286966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:55.564 [2024-11-28 09:52:34.287017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:19:55.564 [2024-11-28 09:52:34.287034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 
09:52:34.287170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.287190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:55.564 [2024-11-28 09:52:34.287206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:55.564 [2024-11-28 09:52:34.287253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.299514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.299601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:55.564 [2024-11-28 09:52:34.299614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.231 ms 00:19:55.564 [2024-11-28 09:52:34.299620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.310486] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:55.564 [2024-11-28 09:52:34.310513] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:55.564 [2024-11-28 09:52:34.310523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.310530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:55.564 [2024-11-28 09:52:34.310537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.824 ms 00:19:55.564 [2024-11-28 09:52:34.310543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.329373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.329468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:55.564 [2024-11-28 09:52:34.329481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.774 ms 00:19:55.564 [2024-11-28 09:52:34.329488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.338719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.338745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:55.564 [2024-11-28 09:52:34.338752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.177 ms 00:19:55.564 [2024-11-28 09:52:34.338758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.347645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.347669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:55.564 [2024-11-28 09:52:34.347677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.846 ms 00:19:55.564 [2024-11-28 09:52:34.347683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.348148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.348170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:55.564 [2024-11-28 09:52:34.348177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:19:55.564 [2024-11-28 09:52:34.348184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.397116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.397148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:55.564 [2024-11-28 09:52:34.397168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.913 ms 00:19:55.564 [2024-11-28 09:52:34.397175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.405658] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:55.564 [2024-11-28 09:52:34.420321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.420349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:55.564 [2024-11-28 09:52:34.420359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.069 ms 00:19:55.564 [2024-11-28 09:52:34.420369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.420437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.420445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:55.564 [2024-11-28 09:52:34.420452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:55.564 [2024-11-28 09:52:34.420458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.420501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.420508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:55.564 [2024-11-28 09:52:34.420516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:55.564 [2024-11-28 09:52:34.420524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.420551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.420559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:55.564 [2024-11-28 09:52:34.420565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:55.564 [2024-11-28 09:52:34.420571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.420599] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:55.564 [2024-11-28 09:52:34.420606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.420612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:55.564 [2024-11-28 09:52:34.420619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:55.564 [2024-11-28 09:52:34.420625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.439447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.439575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:55.564 [2024-11-28 09:52:34.439589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.802 ms 00:19:55.564 [2024-11-28 09:52:34.439596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.439671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.564 [2024-11-28 09:52:34.439681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:55.564 [2024-11-28 09:52:34.439688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:55.564 [2024-11-28 09:52:34.439694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-11-28 09:52:34.440475] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:55.824 [2024-11-28 09:52:34.442838] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 245.627 ms, result 0 00:19:55.824 [2024-11-28 09:52:34.443964] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:55.824 [2024-11-28 09:52:34.454747] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:56.768  [2024-11-28T09:52:36.588Z] Copying: 15/256 [MB] (15 MBps) [2024-11-28T09:52:37.532Z] Copying: 26/256 [MB] (11 MBps) [2024-11-28T09:52:38.476Z] Copying: 37/256 [MB] (11 MBps) [2024-11-28T09:52:39.862Z] Copying: 49/256 [MB] (11 MBps) [2024-11-28T09:52:40.491Z] Copying: 59/256 [MB] (10 MBps) [2024-11-28T09:52:41.880Z] Copying: 70/256 [MB] (11 MBps) [2024-11-28T09:52:42.826Z] Copying: 86/256 [MB] (15 MBps) [2024-11-28T09:52:43.772Z] Copying: 99/256 [MB] (12 MBps) [2024-11-28T09:52:44.718Z] Copying: 110/256 [MB] (11 MBps) [2024-11-28T09:52:45.668Z] Copying: 122/256 [MB] (12 MBps) [2024-11-28T09:52:46.614Z] Copying: 135/256 [MB] (12 MBps) [2024-11-28T09:52:47.558Z] Copying: 145/256 [MB] (10 MBps) [2024-11-28T09:52:48.502Z] Copying: 156/256 [MB] (10 MBps) [2024-11-28T09:52:49.889Z] Copying: 168/256 [MB] (11 MBps) [2024-11-28T09:52:50.460Z] Copying: 179/256 [MB] (11 MBps) [2024-11-28T09:52:51.848Z] Copying: 191/256 [MB] (11 MBps) [2024-11-28T09:52:52.793Z] Copying: 202/256 [MB] (11 MBps) [2024-11-28T09:52:53.734Z] Copying: 213/256 [MB] (11 MBps) [2024-11-28T09:52:54.676Z] Copying: 224/256 [MB] (10 MBps) [2024-11-28T09:52:55.618Z] Copying: 236/256 [MB] (11 MBps) [2024-11-28T09:52:56.191Z] Copying: 247/256 [MB] (11 MBps) [2024-11-28T09:52:56.191Z] Copying: 256/256 [MB] (average 11 MBps)[2024-11-28 09:52:56.147217] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.311 [2024-11-28 09:52:56.154740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.311 [2024-11-28 09:52:56.154770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:17.311 [2024-11-28 09:52:56.154788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:17.311 [2024-11-28 09:52:56.154795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.311 [2024-11-28 09:52:56.154812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:17.311 [2024-11-28 09:52:56.157031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.311 [2024-11-28 09:52:56.157054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:17.311 [2024-11-28 09:52:56.157063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.209 ms 00:20:17.311 [2024-11-28 09:52:56.157070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.311 [2024-11-28 09:52:56.157304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.311 [2024-11-28 09:52:56.157314] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:17.312 [2024-11-28 09:52:56.157321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:20:17.312 [2024-11-28 09:52:56.157327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.312 [2024-11-28 09:52:56.160081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.312 [2024-11-28 09:52:56.160097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:17.312 [2024-11-28 09:52:56.160105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.739 ms 00:20:17.312 [2024-11-28 09:52:56.160112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.312 [2024-11-28 09:52:56.165365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.312 [2024-11-28 09:52:56.165390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:17.312 [2024-11-28 09:52:56.165398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.239 ms 00:20:17.312 [2024-11-28 09:52:56.165404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.312 [2024-11-28 09:52:56.183894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.312 [2024-11-28 09:52:56.184032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:17.312 [2024-11-28 09:52:56.184046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.454 ms 00:20:17.312 [2024-11-28 09:52:56.184052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.575 [2024-11-28 09:52:56.196322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.575 [2024-11-28 09:52:56.196347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:17.575 [2024-11-28 09:52:56.196359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.244 ms 00:20:17.575 [2024-11-28 09:52:56.196366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.575 [2024-11-28 09:52:56.196461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.575 [2024-11-28 09:52:56.196469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:17.575 [2024-11-28 09:52:56.196483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:17.575 [2024-11-28 09:52:56.196488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.575 [2024-11-28 09:52:56.215586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.575 [2024-11-28 09:52:56.215611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:17.575 [2024-11-28 09:52:56.215620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.085 ms 00:20:17.575 [2024-11-28 09:52:56.215625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.575 [2024-11-28 09:52:56.233474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.575 [2024-11-28 09:52:56.233497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:17.575 [2024-11-28 09:52:56.233505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.820 ms 00:20:17.575 [2024-11-28 09:52:56.233510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.575 [2024-11-28 09:52:56.251027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.575 
[2024-11-28 09:52:56.251118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:17.575 [2024-11-28 09:52:56.251174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.489 ms 00:20:17.575 [2024-11-28 09:52:56.251193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.575 [2024-11-28 09:52:56.268521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.575 [2024-11-28 09:52:56.268614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:17.575 [2024-11-28 09:52:56.268655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.232 ms 00:20:17.575 [2024-11-28 09:52:56.268672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.575 [2024-11-28 09:52:56.268703] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:17.575 [2024-11-28 09:52:56.268725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:17.575 [2024-11-28 09:52:56.268750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:17.575 [2024-11-28 09:52:56.268773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:17.575 [2024-11-28 09:52:56.268795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:17.575 [2024-11-28 09:52:56.268844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.268867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.268889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.268911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.268933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.268955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.268976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.268998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269858] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.269988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 
09:52:56.270625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:17.576 [2024-11-28 09:52:56.270918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:17.577 [2024-11-28 09:52:56.270924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:17.577 [2024-11-28 09:52:56.270929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:20:17.577 [2024-11-28 09:52:56.270941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:17.577 [2024-11-28 09:52:56.270947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:17.577 [2024-11-28 09:52:56.270952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:17.577 [2024-11-28 09:52:56.270958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:17.577 [2024-11-28 09:52:56.270964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:17.577 [2024-11-28 09:52:56.270969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:17.577 [2024-11-28 09:52:56.270976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:17.577 [2024-11-28 09:52:56.270988] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:17.577 [2024-11-28 09:52:56.270995] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1a94f5e4-bdc1-43c4-93bd-1d0e31f9ca15 00:20:17.577 [2024-11-28 09:52:56.271002] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:17.577 [2024-11-28 09:52:56.271007] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:17.577 [2024-11-28 09:52:56.271013] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:17.577 [2024-11-28 09:52:56.271019] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:17.577 [2024-11-28 09:52:56.271025] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:17.577 [2024-11-28 09:52:56.271031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:17.577 [2024-11-28 09:52:56.271040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:17.577 [2024-11-28 09:52:56.271046] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:17.577 [2024-11-28 09:52:56.271051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:17.577 [2024-11-28 09:52:56.271056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.577 [2024-11-28 09:52:56.271062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:17.577 [2024-11-28 09:52:56.271069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.353 ms 00:20:17.577 [2024-11-28 09:52:56.271075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.577 [2024-11-28 09:52:56.281311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.577 [2024-11-28 09:52:56.281334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:17.577 [2024-11-28 09:52:56.281342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.220 ms 00:20:17.577 [2024-11-28 09:52:56.281349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.577 [2024-11-28 09:52:56.281648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.577 [2024-11-28 09:52:56.281659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:17.577 [2024-11-28 09:52:56.281666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:20:17.577 [2024-11-28 09:52:56.281671] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.577 [2024-11-28 09:52:56.310908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.577 [2024-11-28 09:52:56.310936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.577 [2024-11-28 09:52:56.310944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.577 [2024-11-28 09:52:56.310954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.577 [2024-11-28 09:52:56.311023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.577 [2024-11-28 09:52:56.311030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.577 [2024-11-28 09:52:56.311036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.577 [2024-11-28 09:52:56.311042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.577 [2024-11-28 09:52:56.311081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.577 [2024-11-28 09:52:56.311089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.577 [2024-11-28 09:52:56.311096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.577 [2024-11-28 09:52:56.311102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.577 [2024-11-28 09:52:56.311119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.577 [2024-11-28 09:52:56.311125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.577 [2024-11-28 09:52:56.311131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.577 [2024-11-28 09:52:56.311136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.577 [2024-11-28 09:52:56.373986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.577 [2024-11-28 09:52:56.374019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.577 [2024-11-28 09:52:56.374030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.577 [2024-11-28 09:52:56.374037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.577 [2024-11-28 09:52:56.425875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.577 [2024-11-28 09:52:56.426043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.577 [2024-11-28 09:52:56.426057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.577 [2024-11-28 09:52:56.426065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.577 [2024-11-28 09:52:56.426134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.577 [2024-11-28 09:52:56.426143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.577 [2024-11-28 09:52:56.426149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.577 [2024-11-28 09:52:56.426168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.577 [2024-11-28 09:52:56.426192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.577 [2024-11-28 09:52:56.426202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.577 [2024-11-28 09:52:56.426209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:20:17.577 [2024-11-28 09:52:56.426215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.577 [2024-11-28 09:52:56.426294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.577 [2024-11-28 09:52:56.426303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.577 [2024-11-28 09:52:56.426312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.577 [2024-11-28 09:52:56.426318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.577 [2024-11-28 09:52:56.426345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.577 [2024-11-28 09:52:56.426352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:17.577 [2024-11-28 09:52:56.426361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.577 [2024-11-28 09:52:56.426367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.577 [2024-11-28 09:52:56.426401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.577 [2024-11-28 09:52:56.426409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.577 [2024-11-28 09:52:56.426415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.577 [2024-11-28 09:52:56.426421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.577 [2024-11-28 09:52:56.426461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.577 [2024-11-28 09:52:56.426472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.577 [2024-11-28 09:52:56.426478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.577 [2024-11-28 09:52:56.426484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.577 [2024-11-28 09:52:56.426610] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 271.848 ms, result 0 00:20:18.150 00:20:18.150 00:20:18.150 09:52:57 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:18.150 09:52:57 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:18.723 09:52:57 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:18.984 [2024-11-28 09:52:57.628947] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:20:18.984 [2024-11-28 09:52:57.629070] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76941 ] 00:20:18.984 [2024-11-28 09:52:57.784267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.244 [2024-11-28 09:52:57.886558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.244 [2024-11-28 09:52:58.118744] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.244 [2024-11-28 09:52:58.118802] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.507 [2024-11-28 09:52:58.275132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.507 [2024-11-28 09:52:58.275183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:19.507 [2024-11-28 09:52:58.275196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:19.507 [2024-11-28 09:52:58.275202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.507 [2024-11-28 09:52:58.277428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.507 [2024-11-28 09:52:58.277577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.507 [2024-11-28 09:52:58.277591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.213 ms 00:20:19.507 [2024-11-28 09:52:58.277597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.507 [2024-11-28 09:52:58.277659] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:19.507 [2024-11-28 09:52:58.278197] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:19.507 [2024-11-28 09:52:58.278210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.507 [2024-11-28 09:52:58.278217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.507 [2024-11-28 09:52:58.278224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:20:19.507 [2024-11-28 09:52:58.278230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.507 [2024-11-28 09:52:58.279988] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:19.507 [2024-11-28 09:52:58.290375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.507 [2024-11-28 09:52:58.290405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:19.507 [2024-11-28 09:52:58.290421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.390 ms 00:20:19.507 [2024-11-28 09:52:58.290427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.507 [2024-11-28 09:52:58.290499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.507 [2024-11-28 09:52:58.290509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:19.507 [2024-11-28 09:52:58.290516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:19.507 [2024-11-28 09:52:58.290522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.507 [2024-11-28 09:52:58.296801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:19.507 [2024-11-28 09:52:58.296930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.507 [2024-11-28 09:52:58.296943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.250 ms 00:20:19.507 [2024-11-28 09:52:58.296950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.507 [2024-11-28 09:52:58.297028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.507 [2024-11-28 09:52:58.297036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.507 [2024-11-28 09:52:58.297043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:19.507 [2024-11-28 09:52:58.297049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.507 [2024-11-28 09:52:58.297070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.507 [2024-11-28 09:52:58.297077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:19.507 [2024-11-28 09:52:58.297084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:19.507 [2024-11-28 09:52:58.297091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.507 [2024-11-28 09:52:58.297113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:19.507 [2024-11-28 09:52:58.300163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.507 [2024-11-28 09:52:58.300185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.507 [2024-11-28 09:52:58.300193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.057 ms 00:20:19.507 [2024-11-28 09:52:58.300199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.507 [2024-11-28 09:52:58.300230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.507 [2024-11-28 09:52:58.300237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:19.507 [2024-11-28 09:52:58.300243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:19.507 [2024-11-28 09:52:58.300250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.507 [2024-11-28 09:52:58.300267] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:19.507 [2024-11-28 09:52:58.300284] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:19.507 [2024-11-28 09:52:58.300313] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:19.507 [2024-11-28 09:52:58.300326] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:19.507 [2024-11-28 09:52:58.300409] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:19.507 [2024-11-28 09:52:58.300418] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:19.507 [2024-11-28 09:52:58.300426] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:19.507 [2024-11-28 09:52:58.300437] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:19.507 [2024-11-28 09:52:58.300444] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:19.507 [2024-11-28 09:52:58.300451] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:19.507 [2024-11-28 09:52:58.300457] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:19.507 [2024-11-28 09:52:58.300463] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:19.507 [2024-11-28 09:52:58.300469] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:19.507 [2024-11-28 09:52:58.300476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.507 [2024-11-28 09:52:58.300482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:19.507 [2024-11-28 09:52:58.300488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:20:19.507 [2024-11-28 09:52:58.300493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.507 [2024-11-28 09:52:58.300571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.507 [2024-11-28 09:52:58.300581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:19.507 [2024-11-28 09:52:58.300588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:19.507 [2024-11-28 09:52:58.300593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.507 [2024-11-28 09:52:58.300672] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:19.507 [2024-11-28 09:52:58.300680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:19.507 [2024-11-28 09:52:58.300687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.507 [2024-11-28 09:52:58.300693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.507 [2024-11-28 09:52:58.300700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:19.507 [2024-11-28 09:52:58.300705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:19.507 [2024-11-28 09:52:58.300710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:19.507 [2024-11-28 09:52:58.300717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:19.507 [2024-11-28 09:52:58.300723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:19.507 [2024-11-28 09:52:58.300728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.507 [2024-11-28 09:52:58.300734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:19.507 [2024-11-28 09:52:58.300744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:19.507 [2024-11-28 09:52:58.300749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.507 [2024-11-28 09:52:58.300756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:19.507 [2024-11-28 09:52:58.300762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:19.507 [2024-11-28 09:52:58.300767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.507 [2024-11-28 09:52:58.300773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:19.507 [2024-11-28 09:52:58.300778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:19.508 [2024-11-28 09:52:58.300782] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.508 [2024-11-28 09:52:58.300787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:19.508 [2024-11-28 09:52:58.300793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:19.508 [2024-11-28 09:52:58.300798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.508 [2024-11-28 09:52:58.300804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:19.508 [2024-11-28 09:52:58.300809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:19.508 [2024-11-28 09:52:58.300814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.508 [2024-11-28 09:52:58.300819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:19.508 [2024-11-28 09:52:58.300824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:19.508 [2024-11-28 09:52:58.300828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.508 [2024-11-28 09:52:58.300833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:19.508 [2024-11-28 09:52:58.300838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:19.508 [2024-11-28 09:52:58.300844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.508 [2024-11-28 09:52:58.300848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:19.508 [2024-11-28 09:52:58.300853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:19.508 [2024-11-28 09:52:58.300858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.508 [2024-11-28 09:52:58.300862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:19.508 [2024-11-28 09:52:58.300867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:19.508 [2024-11-28 09:52:58.300871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.508 [2024-11-28 09:52:58.300877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:19.508 [2024-11-28 09:52:58.300882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:19.508 [2024-11-28 09:52:58.300886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.508 [2024-11-28 09:52:58.300891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:19.508 [2024-11-28 09:52:58.300896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:19.508 [2024-11-28 09:52:58.300902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.508 [2024-11-28 09:52:58.300907] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:19.508 [2024-11-28 09:52:58.300913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:19.508 [2024-11-28 09:52:58.300922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.508 [2024-11-28 09:52:58.300928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.508 [2024-11-28 09:52:58.300934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:19.508 [2024-11-28 09:52:58.300940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:19.508 [2024-11-28 09:52:58.300945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:19.508 
[2024-11-28 09:52:58.300950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:19.508 [2024-11-28 09:52:58.300955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:19.508 [2024-11-28 09:52:58.300960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:19.508 [2024-11-28 09:52:58.300967] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:19.508 [2024-11-28 09:52:58.300974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.508 [2024-11-28 09:52:58.300982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:19.508 [2024-11-28 09:52:58.300987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:19.508 [2024-11-28 09:52:58.300992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:19.508 [2024-11-28 09:52:58.300998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:19.508 [2024-11-28 09:52:58.301003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:19.508 [2024-11-28 09:52:58.301008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:19.508 [2024-11-28 09:52:58.301014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:19.508 [2024-11-28 09:52:58.301019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:19.508 [2024-11-28 09:52:58.301025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:19.508 [2024-11-28 09:52:58.301030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:19.508 [2024-11-28 09:52:58.301036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:19.508 [2024-11-28 09:52:58.301041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:19.508 [2024-11-28 09:52:58.301046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:19.508 [2024-11-28 09:52:58.301051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:19.508 [2024-11-28 09:52:58.301056] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:19.508 [2024-11-28 09:52:58.301063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.508 [2024-11-28 09:52:58.301069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:19.508 [2024-11-28 09:52:58.301074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:19.508 [2024-11-28 09:52:58.301080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:19.508 [2024-11-28 09:52:58.301086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:19.508 [2024-11-28 09:52:58.301091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.508 [2024-11-28 09:52:58.301099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:19.508 [2024-11-28 09:52:58.301106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:20:19.508 [2024-11-28 09:52:58.301112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.508 [2024-11-28 09:52:58.325282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.508 [2024-11-28 09:52:58.325308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.508 [2024-11-28 09:52:58.325316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.097 ms 00:20:19.508 [2024-11-28 09:52:58.325323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.508 [2024-11-28 09:52:58.325420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.508 [2024-11-28 09:52:58.325427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:19.508 [2024-11-28 09:52:58.325434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:19.508 [2024-11-28 09:52:58.325441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.508 [2024-11-28 09:52:58.376098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.508 [2024-11-28 09:52:58.376131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.508 [2024-11-28 09:52:58.376144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.639 ms 00:20:19.508 [2024-11-28 09:52:58.376162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.508 [2024-11-28 09:52:58.376240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.508 [2024-11-28 09:52:58.376250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.508 [2024-11-28 09:52:58.376257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:19.508 [2024-11-28 09:52:58.376263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.508 [2024-11-28 09:52:58.376659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.508 [2024-11-28 09:52:58.376678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.508 [2024-11-28 09:52:58.376692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:20:19.509 [2024-11-28 09:52:58.376698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.509 [2024-11-28 09:52:58.376820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.509 [2024-11-28 09:52:58.376833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.509 [2024-11-28 09:52:58.376840] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:20:19.509 [2024-11-28 09:52:58.376846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.771 [2024-11-28 09:52:58.389103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.771 [2024-11-28 09:52:58.389130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.771 [2024-11-28 09:52:58.389138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.241 ms 00:20:19.771 [2024-11-28 09:52:58.389144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.771 [2024-11-28 09:52:58.399875] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:19.771 [2024-11-28 09:52:58.400019] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:19.771 [2024-11-28 09:52:58.400032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.771 [2024-11-28 09:52:58.400039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:19.771 [2024-11-28 09:52:58.400046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.798 ms 00:20:19.771 [2024-11-28 09:52:58.400052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.771 [2024-11-28 09:52:58.418851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.771 [2024-11-28 09:52:58.418961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:19.771 [2024-11-28 09:52:58.418974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.744 ms 00:20:19.771 [2024-11-28 09:52:58.418980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.771 [2024-11-28 09:52:58.428253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.771 [2024-11-28 09:52:58.428279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:19.771 [2024-11-28 09:52:58.428287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.218 ms 00:20:19.771 [2024-11-28 09:52:58.428293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.771 [2024-11-28 09:52:58.437376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.771 [2024-11-28 09:52:58.437467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:19.771 [2024-11-28 09:52:58.437478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.044 ms 00:20:19.771 [2024-11-28 09:52:58.437484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.771 [2024-11-28 09:52:58.437958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.771 [2024-11-28 09:52:58.437975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:19.771 [2024-11-28 09:52:58.437983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:20:19.771 [2024-11-28 09:52:58.437990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.771 [2024-11-28 09:52:58.486173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.771 [2024-11-28 09:52:58.486204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:19.771 [2024-11-28 09:52:58.486214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.165 ms 00:20:19.771 [2024-11-28 09:52:58.486221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.771 [2024-11-28 09:52:58.494142] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:19.771 [2024-11-28 09:52:58.508593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.771 [2024-11-28 09:52:58.508733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:19.771 [2024-11-28 09:52:58.508747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.303 ms 00:20:19.771 [2024-11-28 09:52:58.508757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.771 [2024-11-28 09:52:58.508827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.771 [2024-11-28 09:52:58.508836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:19.771 [2024-11-28 09:52:58.508843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:19.771 [2024-11-28 09:52:58.508849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.771 [2024-11-28 09:52:58.508891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.771 [2024-11-28 09:52:58.508899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:19.771 [2024-11-28 09:52:58.508905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:19.771 [2024-11-28 09:52:58.508915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.771 [2024-11-28 09:52:58.508942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.771 [2024-11-28 09:52:58.508949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:19.771 [2024-11-28 09:52:58.508956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:19.771 [2024-11-28 09:52:58.508962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.771 [2024-11-28 09:52:58.508989] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:19.771 [2024-11-28 09:52:58.508997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.771 [2024-11-28 09:52:58.509003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:19.771 [2024-11-28 09:52:58.509009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:19.771 [2024-11-28 09:52:58.509015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.771 [2024-11-28 09:52:58.528177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.771 [2024-11-28 09:52:58.528204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:19.771 [2024-11-28 09:52:58.528213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.146 ms 00:20:19.771 [2024-11-28 09:52:58.528220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.771 [2024-11-28 09:52:58.528295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.771 [2024-11-28 09:52:58.528304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:19.771 [2024-11-28 09:52:58.528311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:19.771 [2024-11-28 09:52:58.528317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:19.771 [2024-11-28 09:52:58.529403] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.771 [2024-11-28 09:52:58.531780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 253.989 ms, result 0 00:20:19.771 [2024-11-28 09:52:58.532823] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:19.771 [2024-11-28 09:52:58.543687] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:20.038  [2024-11-28T09:52:58.918Z] Copying: 4096/4096 [kB] (average 11 MBps)[2024-11-28 09:52:58.906874] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:20.038 [2024-11-28 09:52:58.913323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.039 [2024-11-28 09:52:58.913427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:20.039 [2024-11-28 09:52:58.913445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:20.039 [2024-11-28 09:52:58.913451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.039 [2024-11-28 09:52:58.913470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:20.300 [2024-11-28 09:52:58.915597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.300 [2024-11-28 09:52:58.915618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:20.300 [2024-11-28 09:52:58.915626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.117 ms 00:20:20.300 [2024-11-28 09:52:58.915633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.300 [2024-11-28 09:52:58.918014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.300 [2024-11-28 09:52:58.918040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:20.300 [2024-11-28 09:52:58.918047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.365 ms 00:20:20.300 [2024-11-28 09:52:58.918054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.300 [2024-11-28 09:52:58.921556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.300 [2024-11-28 09:52:58.921578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:20.300 [2024-11-28 09:52:58.921585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.486 ms 00:20:20.300 [2024-11-28 09:52:58.921591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.300 [2024-11-28 09:52:58.926812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.300 [2024-11-28 09:52:58.926839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:20.300 [2024-11-28 09:52:58.926848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.201 ms 00:20:20.300 [2024-11-28 09:52:58.926854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.300 [2024-11-28 09:52:58.944811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.300 [2024-11-28 09:52:58.944914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:20.300 [2024-11-28 09:52:58.944926] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 17.917 ms 00:20:20.300 [2024-11-28 09:52:58.944932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.300 [2024-11-28 09:52:58.956752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.300 [2024-11-28 09:52:58.956781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:20.300 [2024-11-28 09:52:58.956789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.796 ms 00:20:20.300 [2024-11-28 09:52:58.956796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.300 [2024-11-28 09:52:58.956889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.300 [2024-11-28 09:52:58.956896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:20.300 [2024-11-28 09:52:58.956909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:20.301 [2024-11-28 09:52:58.956915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.301 [2024-11-28 09:52:58.975373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.301 [2024-11-28 09:52:58.975473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:20.301 [2024-11-28 09:52:58.975485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.445 ms 00:20:20.301 [2024-11-28 09:52:58.975491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.301 [2024-11-28 09:52:58.993431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.301 [2024-11-28 09:52:58.993453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:20.301 [2024-11-28 09:52:58.993460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.916 ms 00:20:20.301 [2024-11-28 09:52:58.993466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.301 [2024-11-28 09:52:59.011164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.301 [2024-11-28 09:52:59.011186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:20.301 [2024-11-28 09:52:59.011194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.672 ms 00:20:20.301 [2024-11-28 09:52:59.011199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.301 [2024-11-28 09:52:59.028966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.301 [2024-11-28 09:52:59.028989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:20.301 [2024-11-28 09:52:59.028997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.721 ms 00:20:20.301 [2024-11-28 09:52:59.029003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.301 [2024-11-28 09:52:59.029028] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:20.301 [2024-11-28 09:52:59.029039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:20.301 [2024-11-28 09:52:59.029065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:20.301 [2024-11-28 09:52:59.029488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029510] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:20.302 [2024-11-28 09:52:59.029657] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:20.302 [2024-11-28 09:52:59.029663] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1a94f5e4-bdc1-43c4-93bd-1d0e31f9ca15 00:20:20.302 [2024-11-28 09:52:59.029669] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:20.302 [2024-11-28 09:52:59.029675] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:20.302 [2024-11-28 09:52:59.029681] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:20.302 [2024-11-28 09:52:59.029687] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:20.302 [2024-11-28 09:52:59.029693] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:20.302 [2024-11-28 09:52:59.029698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:20.302 [2024-11-28 09:52:59.029706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:20.302 [2024-11-28 09:52:59.029710] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:20.302 [2024-11-28 09:52:59.029716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:20.302 [2024-11-28 09:52:59.029722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.302 [2024-11-28 09:52:59.029729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:20.302 [2024-11-28 09:52:59.029735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:20:20.302 [2024-11-28 09:52:59.029741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.302 [2024-11-28 09:52:59.038994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.302 [2024-11-28 09:52:59.039097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:20.302 [2024-11-28 09:52:59.039109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.241 ms 00:20:20.302 [2024-11-28 09:52:59.039115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.302 [2024-11-28 09:52:59.039425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.302 [2024-11-28 09:52:59.039435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:20.302 [2024-11-28 09:52:59.039442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:20:20.302 [2024-11-28 09:52:59.039448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.302 [2024-11-28 09:52:59.068488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.302 [2024-11-28 09:52:59.068515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:20.302 [2024-11-28 09:52:59.068523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.302 [2024-11-28 09:52:59.068532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.302 [2024-11-28 09:52:59.068586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.302 [2024-11-28 09:52:59.068593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.302 [2024-11-28 09:52:59.068599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.302 [2024-11-28 09:52:59.068605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.302 [2024-11-28 09:52:59.068640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.302 [2024-11-28 09:52:59.068647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.302 [2024-11-28 09:52:59.068653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.302 [2024-11-28 09:52:59.068659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.302 [2024-11-28 09:52:59.068675] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.302 [2024-11-28 09:52:59.068681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.302 [2024-11-28 09:52:59.068687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.302 [2024-11-28 09:52:59.068692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.302 [2024-11-28 09:52:59.131664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.302 [2024-11-28 09:52:59.131700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.302 [2024-11-28 09:52:59.131711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.302 [2024-11-28 09:52:59.131720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.563 [2024-11-28 09:52:59.183738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.563 [2024-11-28 09:52:59.183773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.563 [2024-11-28 09:52:59.183782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.563 [2024-11-28 09:52:59.183789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.563 [2024-11-28 09:52:59.183835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.563 [2024-11-28 09:52:59.183843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:20.564 [2024-11-28 09:52:59.183849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.564 [2024-11-28 09:52:59.183856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.564 [2024-11-28 09:52:59.183881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.564 [2024-11-28 09:52:59.183891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.564 [2024-11-28 09:52:59.183898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.564 [2024-11-28 09:52:59.183904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.564 [2024-11-28 09:52:59.183980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.564 [2024-11-28 09:52:59.183989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.564 [2024-11-28 09:52:59.183995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.564 [2024-11-28 09:52:59.184002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.564 [2024-11-28 09:52:59.184029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.564 [2024-11-28 09:52:59.184036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:20.564 [2024-11-28 09:52:59.184045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.564 [2024-11-28 09:52:59.184051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.564 [2024-11-28 09:52:59.184086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.564 [2024-11-28 09:52:59.184094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.564 [2024-11-28 09:52:59.184100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.564 [2024-11-28 09:52:59.184106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:20.564 [2024-11-28 09:52:59.184144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.564 [2024-11-28 09:52:59.184168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.564 [2024-11-28 09:52:59.184176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.564 [2024-11-28 09:52:59.184182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.564 [2024-11-28 09:52:59.184308] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 270.964 ms, result 0 00:20:21.137 00:20:21.137 00:20:21.137 09:52:59 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76966 00:20:21.137 09:52:59 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76966 00:20:21.137 09:52:59 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:21.137 09:52:59 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76966 ']' 00:20:21.137 09:52:59 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.137 09:52:59 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.137 09:52:59 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.137 09:52:59 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.137 09:52:59 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:21.137 [2024-11-28 09:52:59.874954] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:21.137 [2024-11-28 09:52:59.875299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76966 ] 00:20:21.397 [2024-11-28 09:53:00.032379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.397 [2024-11-28 09:53:00.143289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.970 09:53:00 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.970 09:53:00 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:21.970 09:53:00 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:22.232 [2024-11-28 09:53:00.894213] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:22.232 [2024-11-28 09:53:00.894270] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:22.232 [2024-11-28 09:53:01.066661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.232 [2024-11-28 09:53:01.066699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:22.232 [2024-11-28 09:53:01.066712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:22.232 [2024-11-28 09:53:01.066719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.232 [2024-11-28 09:53:01.068904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.232 [2024-11-28 09:53:01.068933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:22.232 [2024-11-28 09:53:01.068943] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:20:22.232 [2024-11-28 09:53:01.068949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.232 [2024-11-28 09:53:01.069010] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:22.232 [2024-11-28 09:53:01.069666] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:22.232 [2024-11-28 09:53:01.069695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.232 [2024-11-28 09:53:01.069702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:22.232 [2024-11-28 09:53:01.069710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:20:22.232 [2024-11-28 09:53:01.069718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.232 [2024-11-28 09:53:01.071007] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:22.232 [2024-11-28 09:53:01.081331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.232 [2024-11-28 09:53:01.081361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:22.232 [2024-11-28 09:53:01.081370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.328 ms 00:20:22.232 [2024-11-28 09:53:01.081383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.232 [2024-11-28 09:53:01.081447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.232 [2024-11-28 09:53:01.081457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:22.232 [2024-11-28 09:53:01.081464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:22.232 [2024-11-28 09:53:01.081472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.232 [2024-11-28 09:53:01.087702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.232 [2024-11-28 09:53:01.087731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:22.232 [2024-11-28 09:53:01.087739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.191 ms 00:20:22.232 [2024-11-28 09:53:01.087747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.232 [2024-11-28 09:53:01.087824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.232 [2024-11-28 09:53:01.087834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:22.232 [2024-11-28 09:53:01.087840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:22.232 [2024-11-28 09:53:01.087850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.232 [2024-11-28 09:53:01.087870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.232 [2024-11-28 09:53:01.087877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:22.232 [2024-11-28 09:53:01.087884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:22.232 [2024-11-28 09:53:01.087891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.232 [2024-11-28 09:53:01.087909] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:22.232 [2024-11-28 09:53:01.090993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:22.232 [2024-11-28 09:53:01.091015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:22.232 [2024-11-28 09:53:01.091024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.087 ms 00:20:22.232 [2024-11-28 09:53:01.091030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.232 [2024-11-28 09:53:01.091060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.232 [2024-11-28 09:53:01.091067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:22.232 [2024-11-28 09:53:01.091077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:22.232 [2024-11-28 09:53:01.091083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.232 [2024-11-28 09:53:01.091101] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:22.232 [2024-11-28 09:53:01.091116] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:22.232 [2024-11-28 09:53:01.091149] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:22.232 [2024-11-28 09:53:01.091174] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:22.232 [2024-11-28 09:53:01.091259] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:22.232 [2024-11-28 09:53:01.091268] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:22.232 [2024-11-28 09:53:01.091282] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:22.232 [2024-11-28 09:53:01.091290] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:22.232 [2024-11-28 09:53:01.091298] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:22.232 [2024-11-28 09:53:01.091305] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:22.232 [2024-11-28 09:53:01.091314] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:22.232 [2024-11-28 09:53:01.091319] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:22.232 [2024-11-28 09:53:01.091328] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:22.232 [2024-11-28 09:53:01.091334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.232 [2024-11-28 09:53:01.091342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:22.232 [2024-11-28 09:53:01.091348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:20:22.232 [2024-11-28 09:53:01.091358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.232 [2024-11-28 09:53:01.091424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.232 [2024-11-28 09:53:01.091432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:22.232 [2024-11-28 09:53:01.091439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:22.232 [2024-11-28 09:53:01.091447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.232 
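(Editor's side note, not part of the captured output: the layout parameters just logged are internally consistent. 23592960 L2P entries at the logged 4-byte address size come to exactly 90 MiB, which matches the 90.00 MiB reported for the l2p region in the NV cache layout dump that follows. A minimal arithmetic check:)
# L2P table size = entries x address size; both values taken from the trace above
$ echo "$(( 23592960 * 4 / 1024 / 1024 )) MiB"
90 MiB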
[2024-11-28 09:53:01.091534] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:22.232 [2024-11-28 09:53:01.091544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:22.232 [2024-11-28 09:53:01.091551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:22.232 [2024-11-28 09:53:01.091559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.232 [2024-11-28 09:53:01.091565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:22.232 [2024-11-28 09:53:01.091572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:22.232 [2024-11-28 09:53:01.091577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:22.232 [2024-11-28 09:53:01.091589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:22.233 [2024-11-28 09:53:01.091596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:22.233 [2024-11-28 09:53:01.091604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:22.233 [2024-11-28 09:53:01.091609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:22.233 [2024-11-28 09:53:01.091616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:22.233 [2024-11-28 09:53:01.091622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:22.233 [2024-11-28 09:53:01.091629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:22.233 [2024-11-28 09:53:01.091635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:22.233 [2024-11-28 09:53:01.091642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.233 [2024-11-28 09:53:01.091647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:22.233 [2024-11-28 09:53:01.091654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:22.233 [2024-11-28 09:53:01.091664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.233 [2024-11-28 09:53:01.091671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:22.233 [2024-11-28 09:53:01.091676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:22.233 [2024-11-28 09:53:01.091682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.233 [2024-11-28 09:53:01.091687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:22.233 [2024-11-28 09:53:01.091695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:22.233 [2024-11-28 09:53:01.091700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.233 [2024-11-28 09:53:01.091706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:22.233 [2024-11-28 09:53:01.091711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:22.233 [2024-11-28 09:53:01.091718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.233 [2024-11-28 09:53:01.091723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:22.233 [2024-11-28 09:53:01.091729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:22.233 [2024-11-28 09:53:01.091734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.233 [2024-11-28 09:53:01.091740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:20:22.233 [2024-11-28 09:53:01.091745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:22.233 [2024-11-28 09:53:01.091752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:22.233 [2024-11-28 09:53:01.091761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:22.233 [2024-11-28 09:53:01.091768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:22.233 [2024-11-28 09:53:01.091773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:22.233 [2024-11-28 09:53:01.091780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:22.233 [2024-11-28 09:53:01.091785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:22.233 [2024-11-28 09:53:01.091792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.233 [2024-11-28 09:53:01.091797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:22.233 [2024-11-28 09:53:01.091805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:22.233 [2024-11-28 09:53:01.091810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.233 [2024-11-28 09:53:01.091817] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:22.233 [2024-11-28 09:53:01.091825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:22.233 [2024-11-28 09:53:01.091832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:22.233 [2024-11-28 09:53:01.091838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.233 [2024-11-28 09:53:01.091845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:22.233 [2024-11-28 09:53:01.091850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:22.233 [2024-11-28 09:53:01.091857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:22.233 [2024-11-28 09:53:01.091863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:22.233 [2024-11-28 09:53:01.091870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:22.233 [2024-11-28 09:53:01.091875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:22.233 [2024-11-28 09:53:01.091883] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:22.233 [2024-11-28 09:53:01.091890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:22.233 [2024-11-28 09:53:01.091900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:22.233 [2024-11-28 09:53:01.091906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:22.233 [2024-11-28 09:53:01.091914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:22.233 [2024-11-28 09:53:01.091919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:22.233 [2024-11-28 09:53:01.091926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:20:22.233 [2024-11-28 09:53:01.091932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:22.233 [2024-11-28 09:53:01.091938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:22.233 [2024-11-28 09:53:01.091944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:22.233 [2024-11-28 09:53:01.091952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:22.233 [2024-11-28 09:53:01.091958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:22.233 [2024-11-28 09:53:01.091965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:22.233 [2024-11-28 09:53:01.091971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:22.233 [2024-11-28 09:53:01.091978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:22.233 [2024-11-28 09:53:01.091984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:22.233 [2024-11-28 09:53:01.091991] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:22.233 [2024-11-28 09:53:01.091998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:22.233 [2024-11-28 09:53:01.092007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:22.233 [2024-11-28 09:53:01.092012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:22.233 [2024-11-28 09:53:01.092020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:22.233 [2024-11-28 09:53:01.092026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:22.233 [2024-11-28 09:53:01.092033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.233 [2024-11-28 09:53:01.092039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:22.233 [2024-11-28 09:53:01.092047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:20:22.233 [2024-11-28 09:53:01.092054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.116481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.116613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:22.496 [2024-11-28 09:53:01.116660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.369 ms 00:20:22.496 [2024-11-28 09:53:01.116680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 
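(Editor's side note: the superblock dump above lists regions in raw block units as hex blk_offs/blk_sz, while the earlier ftl_layout dump uses MiB. As a hedged cross-check, assuming the 4 KiB FTL block size implied by those dumps, the large base-device region type:0x9 with blk_sz:0x1900000 converts to the 102400.00 MiB data_btm region reported earlier:)
# 0x1900000 blocks at an assumed 4 KiB block size; bash arithmetic accepts the hex literal
$ echo "$(( 0x1900000 * 4096 / 1024 / 1024 )) MiB"
102400 MiB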
[2024-11-28 09:53:01.116788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.116808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:22.496 [2024-11-28 09:53:01.116826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:22.496 [2024-11-28 09:53:01.116840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.143249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.143362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:22.496 [2024-11-28 09:53:01.143410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.332 ms 00:20:22.496 [2024-11-28 09:53:01.143428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.143488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.143508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:22.496 [2024-11-28 09:53:01.143526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:22.496 [2024-11-28 09:53:01.143541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.143934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.143974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:22.496 [2024-11-28 09:53:01.143995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:20:22.496 [2024-11-28 09:53:01.144088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.144219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.144279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:22.496 [2024-11-28 09:53:01.144300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:22.496 [2024-11-28 09:53:01.144315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.157855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.157952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:22.496 [2024-11-28 09:53:01.157996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.479 ms 00:20:22.496 [2024-11-28 09:53:01.158014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.186138] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:22.496 [2024-11-28 09:53:01.186176] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:22.496 [2024-11-28 09:53:01.186192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.186199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:22.496 [2024-11-28 09:53:01.186208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.766 ms 00:20:22.496 [2024-11-28 09:53:01.186218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.205144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.205176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:22.496 [2024-11-28 09:53:01.205188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.867 ms 00:20:22.496 [2024-11-28 09:53:01.205194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.214676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.214711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:22.496 [2024-11-28 09:53:01.214723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.423 ms 00:20:22.496 [2024-11-28 09:53:01.214728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.223805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.223828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:22.496 [2024-11-28 09:53:01.223837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.034 ms 00:20:22.496 [2024-11-28 09:53:01.223843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.224350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.224366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:22.496 [2024-11-28 09:53:01.224375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:20:22.496 [2024-11-28 09:53:01.224381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.272885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.272918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:22.496 [2024-11-28 09:53:01.272930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.483 ms 00:20:22.496 [2024-11-28 09:53:01.272937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.280925] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:22.496 [2024-11-28 09:53:01.295523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.295686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:22.496 [2024-11-28 09:53:01.295701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.511 ms 00:20:22.496 [2024-11-28 09:53:01.295709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.295776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.295786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:22.496 [2024-11-28 09:53:01.295793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:22.496 [2024-11-28 09:53:01.295801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.295847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.295856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:22.496 [2024-11-28 09:53:01.295862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.029 ms 00:20:22.496 [2024-11-28 09:53:01.295873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.295893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.295901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:22.496 [2024-11-28 09:53:01.295907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:22.496 [2024-11-28 09:53:01.295916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.295944] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:22.496 [2024-11-28 09:53:01.295956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.295965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:22.496 [2024-11-28 09:53:01.295973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:22.496 [2024-11-28 09:53:01.295980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.315392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.315418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:22.496 [2024-11-28 09:53:01.315429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.393 ms 00:20:22.496 [2024-11-28 09:53:01.315435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.496 [2024-11-28 09:53:01.315512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.496 [2024-11-28 09:53:01.315520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:22.496 [2024-11-28 09:53:01.315531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:22.496 [2024-11-28 09:53:01.315537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.497 [2024-11-28 09:53:01.316325] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:22.497 [2024-11-28 09:53:01.318684] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 249.388 ms, result 0 00:20:22.497 [2024-11-28 09:53:01.320378] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:22.497 Some configs were skipped because the RPC state that can call them passed over. 
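(Editor's side note: the two unmap calls traced below are the ones issued by trim.sh at this point in the run; the commands are copied from the trace. Note that 23591936 + 1024 = 23592960, exactly the L2P entry count logged during startup, so the test trims 1024 blocks at the very start and at the very end of the device's logical address space:)
# First trim: 1024 blocks starting at LBA 0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
# Second trim: 1024 blocks ending at LBA 23592960, the last addressable block (23591936 + 1024 = 23592960)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024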
00:20:22.497 09:53:01 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:22.758 [2024-11-28 09:53:01.541970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.758 [2024-11-28 09:53:01.542101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:22.758 [2024-11-28 09:53:01.542147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.846 ms 00:20:22.758 [2024-11-28 09:53:01.542178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.758 [2024-11-28 09:53:01.542217] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.095 ms, result 0 00:20:22.758 true 00:20:22.758 09:53:01 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:23.018 [2024-11-28 09:53:01.737826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.018 [2024-11-28 09:53:01.737937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:23.018 [2024-11-28 09:53:01.737979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.507 ms 00:20:23.018 [2024-11-28 09:53:01.737997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.018 [2024-11-28 09:53:01.738036] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.718 ms, result 0 00:20:23.018 true 00:20:23.018 09:53:01 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76966 00:20:23.018 09:53:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76966 ']' 00:20:23.018 09:53:01 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76966 00:20:23.018 09:53:01 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:23.018 09:53:01 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.018 09:53:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76966 00:20:23.018 killing process with pid 76966 00:20:23.018 09:53:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:23.018 09:53:01 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:23.018 09:53:01 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76966' 00:20:23.018 09:53:01 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76966 00:20:23.018 09:53:01 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76966 00:20:23.590 [2024-11-28 09:53:02.341511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.590 [2024-11-28 09:53:02.341561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:23.590 [2024-11-28 09:53:02.341573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:23.590 [2024-11-28 09:53:02.341581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.590 [2024-11-28 09:53:02.341601] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:23.590 [2024-11-28 09:53:02.343730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.590 [2024-11-28 09:53:02.343755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:23.590 [2024-11-28 09:53:02.343768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.114 ms 00:20:23.590 [2024-11-28 09:53:02.343774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.590 [2024-11-28 09:53:02.344025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.591 [2024-11-28 09:53:02.344035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:23.591 [2024-11-28 09:53:02.344043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:20:23.591 [2024-11-28 09:53:02.344049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.591 [2024-11-28 09:53:02.347689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.591 [2024-11-28 09:53:02.347856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:23.591 [2024-11-28 09:53:02.347873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:20:23.591 [2024-11-28 09:53:02.347880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.591 [2024-11-28 09:53:02.353228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.591 [2024-11-28 09:53:02.353252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:23.591 [2024-11-28 09:53:02.353261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.299 ms 00:20:23.591 [2024-11-28 09:53:02.353268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.591 [2024-11-28 09:53:02.361749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.591 [2024-11-28 09:53:02.361778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:23.591 [2024-11-28 09:53:02.361790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.411 ms 00:20:23.591 [2024-11-28 09:53:02.361796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.591 [2024-11-28 09:53:02.368986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.591 [2024-11-28 09:53:02.369015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:23.591 [2024-11-28 09:53:02.369025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.156 ms 00:20:23.591 [2024-11-28 09:53:02.369032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.591 [2024-11-28 09:53:02.369147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.591 [2024-11-28 09:53:02.369167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:23.591 [2024-11-28 09:53:02.369176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:23.591 [2024-11-28 09:53:02.369182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.591 [2024-11-28 09:53:02.377869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.591 [2024-11-28 09:53:02.377893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:23.591 [2024-11-28 09:53:02.377902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.670 ms 00:20:23.591 [2024-11-28 09:53:02.377908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.591 [2024-11-28 09:53:02.386192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.591 [2024-11-28 09:53:02.386214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:23.591 [2024-11-28 
09:53:02.386225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.253 ms 00:20:23.591 [2024-11-28 09:53:02.386231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.591 [2024-11-28 09:53:02.393718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.591 [2024-11-28 09:53:02.393741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:23.591 [2024-11-28 09:53:02.393749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.446 ms 00:20:23.591 [2024-11-28 09:53:02.393755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.591 [2024-11-28 09:53:02.401394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.591 [2024-11-28 09:53:02.401416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:23.591 [2024-11-28 09:53:02.401425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.586 ms 00:20:23.591 [2024-11-28 09:53:02.401430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.591 [2024-11-28 09:53:02.401466] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:23.591 [2024-11-28 09:53:02.401477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401592] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 
09:53:02.401761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:23.591 [2024-11-28 09:53:02.401926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.401994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:23.591 [2024-11-28 09:53:02.402175] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:23.591 [2024-11-28 09:53:02.402185] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1a94f5e4-bdc1-43c4-93bd-1d0e31f9ca15 00:20:23.591 [2024-11-28 09:53:02.402195] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:23.591 [2024-11-28 09:53:02.402202] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:23.591 [2024-11-28 09:53:02.402208] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:23.591 [2024-11-28 09:53:02.402217] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:23.591 [2024-11-28 09:53:02.402224] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:23.591 [2024-11-28 09:53:02.402233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:23.591 [2024-11-28 09:53:02.402239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:23.591 [2024-11-28 09:53:02.402246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:23.591 [2024-11-28 09:53:02.402251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:23.591 [2024-11-28 09:53:02.402257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.591 [2024-11-28 09:53:02.402264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:23.591 [2024-11-28 09:53:02.402273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:20:23.591 [2024-11-28 09:53:02.402292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.591 [2024-11-28 09:53:02.412658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.591 [2024-11-28 09:53:02.412784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:23.591 [2024-11-28 09:53:02.412801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.348 ms 00:20:23.591 [2024-11-28 09:53:02.412808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.591 [2024-11-28 09:53:02.413120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:23.591 [2024-11-28 09:53:02.413130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:23.591 [2024-11-28 09:53:02.413140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:20:23.591 [2024-11-28 09:53:02.413145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.591 [2024-11-28 09:53:02.449916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.591 [2024-11-28 09:53:02.449942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:23.591 [2024-11-28 09:53:02.449952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.592 [2024-11-28 09:53:02.449958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.592 [2024-11-28 09:53:02.450043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.592 [2024-11-28 09:53:02.450051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:23.592 [2024-11-28 09:53:02.450060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.592 [2024-11-28 09:53:02.450066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.592 [2024-11-28 09:53:02.450103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.592 [2024-11-28 09:53:02.450112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:23.592 [2024-11-28 09:53:02.450122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.592 [2024-11-28 09:53:02.450128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.592 [2024-11-28 09:53:02.450143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.592 [2024-11-28 09:53:02.450149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:23.592 [2024-11-28 09:53:02.450170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.592 [2024-11-28 09:53:02.450178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.853 [2024-11-28 09:53:02.513713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.853 [2024-11-28 09:53:02.513742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:23.853 [2024-11-28 09:53:02.513753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.853 [2024-11-28 09:53:02.513759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.853 [2024-11-28 09:53:02.565477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.853 [2024-11-28 09:53:02.565507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:23.853 [2024-11-28 09:53:02.565520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.853 [2024-11-28 09:53:02.565527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.853 [2024-11-28 09:53:02.565600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.853 [2024-11-28 09:53:02.565608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:23.853 [2024-11-28 09:53:02.565619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.853 [2024-11-28 09:53:02.565625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:23.853 [2024-11-28 09:53:02.565654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.853 [2024-11-28 09:53:02.565661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:23.853 [2024-11-28 09:53:02.565669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.853 [2024-11-28 09:53:02.565675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.853 [2024-11-28 09:53:02.565756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.853 [2024-11-28 09:53:02.565764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:23.853 [2024-11-28 09:53:02.565772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.853 [2024-11-28 09:53:02.565777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.854 [2024-11-28 09:53:02.565810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.854 [2024-11-28 09:53:02.565817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:23.854 [2024-11-28 09:53:02.565825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.854 [2024-11-28 09:53:02.565831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.854 [2024-11-28 09:53:02.565871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.854 [2024-11-28 09:53:02.565879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:23.854 [2024-11-28 09:53:02.565889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.854 [2024-11-28 09:53:02.565895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.854 [2024-11-28 09:53:02.565938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.854 [2024-11-28 09:53:02.565946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:23.854 [2024-11-28 09:53:02.565954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.854 [2024-11-28 09:53:02.565961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.854 [2024-11-28 09:53:02.566087] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 224.555 ms, result 0 00:20:24.424 09:53:03 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:24.424 [2024-11-28 09:53:03.194027] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:20:24.424 [2024-11-28 09:53:03.194306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77013 ] 00:20:24.684 [2024-11-28 09:53:03.348339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.684 [2024-11-28 09:53:03.434327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.945 [2024-11-28 09:53:03.666135] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:24.945 [2024-11-28 09:53:03.666203] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:24.945 [2024-11-28 09:53:03.822438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.945 [2024-11-28 09:53:03.822474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:24.945 [2024-11-28 09:53:03.822486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:24.945 [2024-11-28 09:53:03.822492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.208 [2024-11-28 09:53:03.824909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.208 [2024-11-28 09:53:03.824944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:25.208 [2024-11-28 09:53:03.824953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.404 ms 00:20:25.208 [2024-11-28 09:53:03.824960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.208 [2024-11-28 09:53:03.825037] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:25.208 [2024-11-28 09:53:03.825609] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:25.208 [2024-11-28 09:53:03.825634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.208 [2024-11-28 09:53:03.825641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:25.208 [2024-11-28 09:53:03.825649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:20:25.208 [2024-11-28 09:53:03.825656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.208 [2024-11-28 09:53:03.827036] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:25.208 [2024-11-28 09:53:03.837469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.208 [2024-11-28 09:53:03.837497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:25.208 [2024-11-28 09:53:03.837506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.434 ms 00:20:25.208 [2024-11-28 09:53:03.837513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.208 [2024-11-28 09:53:03.837585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.208 [2024-11-28 09:53:03.837595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:25.208 [2024-11-28 09:53:03.837602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:25.208 [2024-11-28 09:53:03.837608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.208 [2024-11-28 09:53:03.843847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:25.209 [2024-11-28 09:53:03.843871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:25.209 [2024-11-28 09:53:03.843878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.209 ms 00:20:25.209 [2024-11-28 09:53:03.843884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.209 [2024-11-28 09:53:03.843961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.209 [2024-11-28 09:53:03.843968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:25.209 [2024-11-28 09:53:03.843975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:25.209 [2024-11-28 09:53:03.843981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.209 [2024-11-28 09:53:03.844000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.209 [2024-11-28 09:53:03.844007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:25.209 [2024-11-28 09:53:03.844013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:25.209 [2024-11-28 09:53:03.844019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.209 [2024-11-28 09:53:03.844038] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:25.209 [2024-11-28 09:53:03.847082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.209 [2024-11-28 09:53:03.847104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:25.209 [2024-11-28 09:53:03.847112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.048 ms 00:20:25.209 [2024-11-28 09:53:03.847118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.209 [2024-11-28 09:53:03.847148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.209 [2024-11-28 09:53:03.847167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:25.209 [2024-11-28 09:53:03.847173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:25.209 [2024-11-28 09:53:03.847180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.209 [2024-11-28 09:53:03.847197] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:25.209 [2024-11-28 09:53:03.847213] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:25.209 [2024-11-28 09:53:03.847241] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:25.209 [2024-11-28 09:53:03.847253] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:25.209 [2024-11-28 09:53:03.847335] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:25.209 [2024-11-28 09:53:03.847345] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:25.209 [2024-11-28 09:53:03.847353] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:25.209 [2024-11-28 09:53:03.847364] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:25.209 [2024-11-28 09:53:03.847371] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:25.209 [2024-11-28 09:53:03.847377] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:25.209 [2024-11-28 09:53:03.847383] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:25.209 [2024-11-28 09:53:03.847389] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:25.209 [2024-11-28 09:53:03.847395] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:25.209 [2024-11-28 09:53:03.847401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.209 [2024-11-28 09:53:03.847407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:25.209 [2024-11-28 09:53:03.847414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:20:25.209 [2024-11-28 09:53:03.847420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.209 [2024-11-28 09:53:03.847486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.209 [2024-11-28 09:53:03.847496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:25.209 [2024-11-28 09:53:03.847502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:25.209 [2024-11-28 09:53:03.847507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.209 [2024-11-28 09:53:03.847584] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:25.209 [2024-11-28 09:53:03.847592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:25.209 [2024-11-28 09:53:03.847599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.209 [2024-11-28 09:53:03.847605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.209 [2024-11-28 09:53:03.847611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:25.209 [2024-11-28 09:53:03.847617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:25.209 [2024-11-28 09:53:03.847622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:25.209 [2024-11-28 09:53:03.847628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:25.209 [2024-11-28 09:53:03.847634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:25.209 [2024-11-28 09:53:03.847639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.209 [2024-11-28 09:53:03.847644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:25.209 [2024-11-28 09:53:03.847655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:25.209 [2024-11-28 09:53:03.847662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.209 [2024-11-28 09:53:03.847667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:25.209 [2024-11-28 09:53:03.847672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:25.209 [2024-11-28 09:53:03.847677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.209 [2024-11-28 09:53:03.847683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:25.209 [2024-11-28 09:53:03.847688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:25.209 [2024-11-28 09:53:03.847692] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.209 [2024-11-28 09:53:03.847698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:25.209 [2024-11-28 09:53:03.847704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:25.209 [2024-11-28 09:53:03.847709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.209 [2024-11-28 09:53:03.847714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:25.209 [2024-11-28 09:53:03.847719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:25.209 [2024-11-28 09:53:03.847724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.209 [2024-11-28 09:53:03.847729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:25.209 [2024-11-28 09:53:03.847735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:25.209 [2024-11-28 09:53:03.847740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.209 [2024-11-28 09:53:03.847745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:25.209 [2024-11-28 09:53:03.847750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:25.209 [2024-11-28 09:53:03.847756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.209 [2024-11-28 09:53:03.847761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:25.209 [2024-11-28 09:53:03.847766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:25.209 [2024-11-28 09:53:03.847772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.209 [2024-11-28 09:53:03.847777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:25.209 [2024-11-28 09:53:03.847782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:25.209 [2024-11-28 09:53:03.847788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.209 [2024-11-28 09:53:03.847793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:25.209 [2024-11-28 09:53:03.847798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:25.209 [2024-11-28 09:53:03.847803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.209 [2024-11-28 09:53:03.847808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:25.209 [2024-11-28 09:53:03.847813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:25.209 [2024-11-28 09:53:03.847819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.209 [2024-11-28 09:53:03.847824] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:25.209 [2024-11-28 09:53:03.847831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:25.209 [2024-11-28 09:53:03.847839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.209 [2024-11-28 09:53:03.847844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.209 [2024-11-28 09:53:03.847850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:25.209 [2024-11-28 09:53:03.847855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:25.209 [2024-11-28 09:53:03.847861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:25.209 
[2024-11-28 09:53:03.847867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:25.209 [2024-11-28 09:53:03.847872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:25.209 [2024-11-28 09:53:03.847877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:25.209 [2024-11-28 09:53:03.847885] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:25.209 [2024-11-28 09:53:03.847892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.209 [2024-11-28 09:53:03.847899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:25.209 [2024-11-28 09:53:03.847905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:25.209 [2024-11-28 09:53:03.847910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:25.209 [2024-11-28 09:53:03.847917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:25.209 [2024-11-28 09:53:03.847923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:25.210 [2024-11-28 09:53:03.847929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:25.210 [2024-11-28 09:53:03.847937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:25.210 [2024-11-28 09:53:03.847942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:25.210 [2024-11-28 09:53:03.847948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:25.210 [2024-11-28 09:53:03.847954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:25.210 [2024-11-28 09:53:03.847960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:25.210 [2024-11-28 09:53:03.847966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:25.210 [2024-11-28 09:53:03.847972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:25.210 [2024-11-28 09:53:03.847978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:25.210 [2024-11-28 09:53:03.847984] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:25.210 [2024-11-28 09:53:03.847990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.210 [2024-11-28 09:53:03.847998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:25.210 [2024-11-28 09:53:03.848004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:25.210 [2024-11-28 09:53:03.848010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:25.210 [2024-11-28 09:53:03.848016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:25.210 [2024-11-28 09:53:03.848021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:03.848030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:25.210 [2024-11-28 09:53:03.848037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:20:25.210 [2024-11-28 09:53:03.848042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:03.872236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:03.872263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:25.210 [2024-11-28 09:53:03.872272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.136 ms 00:20:25.210 [2024-11-28 09:53:03.872278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:03.872375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:03.872383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:25.210 [2024-11-28 09:53:03.872390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:25.210 [2024-11-28 09:53:03.872395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:03.912100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:03.912133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:25.210 [2024-11-28 09:53:03.912145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.688 ms 00:20:25.210 [2024-11-28 09:53:03.912163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:03.912226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:03.912235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:25.210 [2024-11-28 09:53:03.912242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:25.210 [2024-11-28 09:53:03.912248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:03.912630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:03.912650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:25.210 [2024-11-28 09:53:03.912658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:20:25.210 [2024-11-28 09:53:03.912667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:03.912785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:03.912799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:25.210 [2024-11-28 09:53:03.912806] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:25.210 [2024-11-28 09:53:03.912814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:03.925113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:03.925140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:25.210 [2024-11-28 09:53:03.925148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.284 ms 00:20:25.210 [2024-11-28 09:53:03.925168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:03.935859] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:25.210 [2024-11-28 09:53:03.935887] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:25.210 [2024-11-28 09:53:03.935897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:03.935904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:25.210 [2024-11-28 09:53:03.935910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.634 ms 00:20:25.210 [2024-11-28 09:53:03.935916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:03.954369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:03.954524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:25.210 [2024-11-28 09:53:03.954538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.396 ms 00:20:25.210 [2024-11-28 09:53:03.954545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:03.963647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:03.963672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:25.210 [2024-11-28 09:53:03.963680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.049 ms 00:20:25.210 [2024-11-28 09:53:03.963686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:03.972652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:03.972677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:25.210 [2024-11-28 09:53:03.972685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.923 ms 00:20:25.210 [2024-11-28 09:53:03.972691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:03.973149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:03.973175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:25.210 [2024-11-28 09:53:03.973183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:20:25.210 [2024-11-28 09:53:03.973189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:04.022343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:04.022378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:25.210 [2024-11-28 09:53:04.022389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 49.135 ms 00:20:25.210 [2024-11-28 09:53:04.022397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:04.030486] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:25.210 [2024-11-28 09:53:04.044689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:04.044845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:25.210 [2024-11-28 09:53:04.044859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.234 ms 00:20:25.210 [2024-11-28 09:53:04.044870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:04.044951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:04.044960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:25.210 [2024-11-28 09:53:04.044968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:25.210 [2024-11-28 09:53:04.044974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:04.045017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:04.045025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:25.210 [2024-11-28 09:53:04.045032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:25.210 [2024-11-28 09:53:04.045041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:04.045069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:04.045077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:25.210 [2024-11-28 09:53:04.045083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:25.210 [2024-11-28 09:53:04.045089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:04.045118] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:25.210 [2024-11-28 09:53:04.045126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:04.045132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:25.210 [2024-11-28 09:53:04.045138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:25.210 [2024-11-28 09:53:04.045144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:04.063861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:04.063950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:25.210 [2024-11-28 09:53:04.063963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.679 ms 00:20:25.210 [2024-11-28 09:53:04.063970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.210 [2024-11-28 09:53:04.064042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.210 [2024-11-28 09:53:04.064051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:25.210 [2024-11-28 09:53:04.064058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:25.210 [2024-11-28 09:53:04.064065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:25.210 [2024-11-28 09:53:04.066314] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:25.210 [2024-11-28 09:53:04.075638] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 243.015 ms, result 0 00:20:25.211 [2024-11-28 09:53:04.077643] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:25.472 [2024-11-28 09:53:04.090569] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:26.418  [2024-11-28T09:53:06.244Z] Copying: 13/256 [MB] (13 MBps) [2024-11-28T09:53:07.188Z] Copying: 24/256 [MB] (10 MBps) [2024-11-28T09:53:08.577Z] Copying: 36/256 [MB] (11 MBps) [2024-11-28T09:53:09.150Z] Copying: 46/256 [MB] (10 MBps) [2024-11-28T09:53:10.585Z] Copying: 58/256 [MB] (11 MBps) [2024-11-28T09:53:11.182Z] Copying: 69/256 [MB] (11 MBps) [2024-11-28T09:53:12.571Z] Copying: 81/256 [MB] (11 MBps) [2024-11-28T09:53:13.144Z] Copying: 91/256 [MB] (10 MBps) [2024-11-28T09:53:14.533Z] Copying: 103/256 [MB] (11 MBps) [2024-11-28T09:53:15.476Z] Copying: 113/256 [MB] (10 MBps) [2024-11-28T09:53:16.421Z] Copying: 125/256 [MB] (11 MBps) [2024-11-28T09:53:17.367Z] Copying: 136/256 [MB] (11 MBps) [2024-11-28T09:53:18.313Z] Copying: 148/256 [MB] (11 MBps) [2024-11-28T09:53:19.258Z] Copying: 160/256 [MB] (11 MBps) [2024-11-28T09:53:20.202Z] Copying: 172/256 [MB] (11 MBps) [2024-11-28T09:53:21.149Z] Copying: 183/256 [MB] (11 MBps) [2024-11-28T09:53:22.538Z] Copying: 194/256 [MB] (11 MBps) [2024-11-28T09:53:23.484Z] Copying: 209472/262144 [kB] (10096 kBps) [2024-11-28T09:53:24.430Z] Copying: 214/256 [MB] (10 MBps) [2024-11-28T09:53:25.373Z] Copying: 225/256 [MB] (11 MBps) [2024-11-28T09:53:26.316Z] Copying: 237/256 [MB] (11 MBps) [2024-11-28T09:53:26.887Z] Copying: 248/256 [MB] (11 MBps) [2024-11-28T09:53:27.149Z] Copying: 256/256 [MB] (average 11 MBps)[2024-11-28 09:53:27.107546] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:48.269 [2024-11-28 09:53:27.120072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.269 [2024-11-28 09:53:27.120306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:48.269 [2024-11-28 09:53:27.121061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:48.269 [2024-11-28 09:53:27.121409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.269 [2024-11-28 09:53:27.121562] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:48.269 [2024-11-28 09:53:27.129978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.269 [2024-11-28 09:53:27.130061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:48.269 [2024-11-28 09:53:27.130092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.323 ms 00:20:48.269 [2024-11-28 09:53:27.130116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.269 [2024-11-28 09:53:27.131034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.269 [2024-11-28 09:53:27.131100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:48.269 [2024-11-28 09:53:27.131129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:20:48.269 
[2024-11-28 09:53:27.131176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.269 [2024-11-28 09:53:27.139968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.269 [2024-11-28 09:53:27.140002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:48.269 [2024-11-28 09:53:27.140014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.734 ms 00:20:48.269 [2024-11-28 09:53:27.140024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.269 [2024-11-28 09:53:27.147446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.269 [2024-11-28 09:53:27.147488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:48.269 [2024-11-28 09:53:27.147500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.391 ms 00:20:48.269 [2024-11-28 09:53:27.147509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.533 [2024-11-28 09:53:27.173437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.533 [2024-11-28 09:53:27.173647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:48.533 [2024-11-28 09:53:27.173670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.842 ms 00:20:48.533 [2024-11-28 09:53:27.173679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.533 [2024-11-28 09:53:27.190691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.533 [2024-11-28 09:53:27.190743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:48.533 [2024-11-28 09:53:27.190766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.747 ms 00:20:48.533 [2024-11-28 09:53:27.190775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.533 [2024-11-28 09:53:27.190949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.533 [2024-11-28 09:53:27.190964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:48.533 [2024-11-28 09:53:27.190988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:48.533 [2024-11-28 09:53:27.190999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.533 [2024-11-28 09:53:27.217568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.533 [2024-11-28 09:53:27.217626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:48.533 [2024-11-28 09:53:27.217639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.550 ms 00:20:48.533 [2024-11-28 09:53:27.217647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.533 [2024-11-28 09:53:27.243352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.533 [2024-11-28 09:53:27.243398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:48.533 [2024-11-28 09:53:27.243410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.637 ms 00:20:48.533 [2024-11-28 09:53:27.243418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.533 [2024-11-28 09:53:27.268742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.533 [2024-11-28 09:53:27.268948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:48.533 [2024-11-28 09:53:27.268971] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.273 ms 00:20:48.533 [2024-11-28 09:53:27.268979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.533 [2024-11-28 09:53:27.301207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.533 [2024-11-28 09:53:27.301264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:48.533 [2024-11-28 09:53:27.301279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.621 ms 00:20:48.533 [2024-11-28 09:53:27.301288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.533 [2024-11-28 09:53:27.301338] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:48.533 [2024-11-28 09:53:27.301370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 
261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:48.533 [2024-11-28 09:53:27.301594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301950] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.301997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 
09:53:27.302184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:48.534 [2024-11-28 09:53:27.302242] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:48.534 [2024-11-28 09:53:27.302251] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1a94f5e4-bdc1-43c4-93bd-1d0e31f9ca15 00:20:48.534 [2024-11-28 09:53:27.302260] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:48.534 [2024-11-28 09:53:27.302268] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:48.534 [2024-11-28 09:53:27.302275] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:48.534 [2024-11-28 09:53:27.302284] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:48.534 [2024-11-28 09:53:27.302293] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:48.534 [2024-11-28 09:53:27.302302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:48.534 [2024-11-28 09:53:27.302313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:48.534 [2024-11-28 09:53:27.302321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:48.534 [2024-11-28 09:53:27.302328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:48.534 [2024-11-28 09:53:27.302337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.534 [2024-11-28 09:53:27.302345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:48.534 [2024-11-28 09:53:27.302356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:20:48.534 [2024-11-28 09:53:27.302364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.534 [2024-11-28 09:53:27.316843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.534 [2024-11-28 09:53:27.316888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:48.534 [2024-11-28 09:53:27.316900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.445 ms 00:20:48.534 [2024-11-28 09:53:27.316909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.534 [2024-11-28 09:53:27.317450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.534 [2024-11-28 09:53:27.317468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:48.534 [2024-11-28 09:53:27.317480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:20:48.535 [2024-11-28 09:53:27.317490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.535 [2024-11-28 09:53:27.359420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:20:48.535 [2024-11-28 09:53:27.359648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:48.535 [2024-11-28 09:53:27.359670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.535 [2024-11-28 09:53:27.359687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.535 [2024-11-28 09:53:27.359775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.535 [2024-11-28 09:53:27.359787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:48.535 [2024-11-28 09:53:27.359796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.535 [2024-11-28 09:53:27.359805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.535 [2024-11-28 09:53:27.359861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.535 [2024-11-28 09:53:27.359873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:48.535 [2024-11-28 09:53:27.359883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.535 [2024-11-28 09:53:27.359891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.535 [2024-11-28 09:53:27.359915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.535 [2024-11-28 09:53:27.359925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:48.535 [2024-11-28 09:53:27.359935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.535 [2024-11-28 09:53:27.359945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.796 [2024-11-28 09:53:27.450783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.796 [2024-11-28 09:53:27.451023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:48.796 [2024-11-28 09:53:27.451045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.796 [2024-11-28 09:53:27.451055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.796 [2024-11-28 09:53:27.524858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.796 [2024-11-28 09:53:27.524924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:48.796 [2024-11-28 09:53:27.524939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.797 [2024-11-28 09:53:27.524949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.797 [2024-11-28 09:53:27.525019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.797 [2024-11-28 09:53:27.525030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:48.797 [2024-11-28 09:53:27.525040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.797 [2024-11-28 09:53:27.525050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.797 [2024-11-28 09:53:27.525086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.797 [2024-11-28 09:53:27.525103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:48.797 [2024-11-28 09:53:27.525112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.797 [2024-11-28 09:53:27.525122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.797 [2024-11-28 
09:53:27.525262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.797 [2024-11-28 09:53:27.525277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:48.797 [2024-11-28 09:53:27.525287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.797 [2024-11-28 09:53:27.525296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.797 [2024-11-28 09:53:27.525341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.797 [2024-11-28 09:53:27.525368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:48.797 [2024-11-28 09:53:27.525383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.797 [2024-11-28 09:53:27.525392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.797 [2024-11-28 09:53:27.525449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.797 [2024-11-28 09:53:27.525461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:48.797 [2024-11-28 09:53:27.525470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.797 [2024-11-28 09:53:27.525478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.797 [2024-11-28 09:53:27.525540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.797 [2024-11-28 09:53:27.525555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:48.797 [2024-11-28 09:53:27.525565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.797 [2024-11-28 09:53:27.525575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.797 [2024-11-28 09:53:27.525763] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 405.683 ms, result 0 00:20:49.742 00:20:49.742 00:20:49.743 09:53:28 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:50.316 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:50.316 09:53:28 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:50.316 09:53:28 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:50.316 09:53:28 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:50.316 09:53:28 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:50.316 09:53:28 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:50.316 09:53:29 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:50.316 09:53:29 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76966 00:20:50.316 09:53:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76966 ']' 00:20:50.316 09:53:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76966 00:20:50.317 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76966) - No such process 00:20:50.317 Process with pid 76966 is not found 00:20:50.317 09:53:29 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76966 is not found' 00:20:50.317 00:20:50.317 real 1m30.450s 00:20:50.317 user 1m51.989s 00:20:50.317 sys 0m5.742s 00:20:50.317 ************************************ 00:20:50.317 END TEST ftl_trim 00:20:50.317 ************************************ 00:20:50.317 
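A note on the cleanup just above: the "kill: (76966) - No such process" and "Process with pid 76966 is not found" lines are expected output, not a failure. The spdk_tgt recorded under pid 76966 had already exited by the time the cleanup trap ran, so the helper only probes for the process and reports that it is gone; the trim test itself finished cleanly (the md5sum check printed OK and the END TEST banner follows). A minimal sketch of that probe-then-kill idiom, in plain bash, is shown below; it is not the exact killprocess helper from autotest_common.sh, just the shape of the check the xtrace exercises.

    # Sketch only: stop a target if it is still running, otherwise just report it.
    # "$1" is the pid recorded when the target was started.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1               # nothing recorded, nothing to do
        if kill -0 "$pid" 2>/dev/null; then     # kill -0 only checks for existence
            kill "$pid" && wait "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
    }
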
09:53:29 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.317 09:53:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:50.317 09:53:29 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:50.317 09:53:29 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:50.317 09:53:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.317 09:53:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:50.317 ************************************ 00:20:50.317 START TEST ftl_restore 00:20:50.317 ************************************ 00:20:50.317 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:50.317 * Looking for test storage... 00:20:50.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:50.317 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:50.317 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:50.579 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:20:50.579 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:50.579 09:53:29 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:50.579 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:50.579 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:50.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.579 --rc genhtml_branch_coverage=1 00:20:50.579 --rc genhtml_function_coverage=1 00:20:50.579 --rc genhtml_legend=1 00:20:50.579 --rc geninfo_all_blocks=1 00:20:50.579 --rc geninfo_unexecuted_blocks=1 00:20:50.579 00:20:50.579 ' 00:20:50.579 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:50.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.579 --rc genhtml_branch_coverage=1 00:20:50.579 --rc genhtml_function_coverage=1 00:20:50.579 --rc genhtml_legend=1 00:20:50.579 --rc geninfo_all_blocks=1 00:20:50.579 --rc geninfo_unexecuted_blocks=1 00:20:50.579 00:20:50.579 ' 00:20:50.579 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:50.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.579 --rc genhtml_branch_coverage=1 00:20:50.579 --rc genhtml_function_coverage=1 00:20:50.579 --rc genhtml_legend=1 00:20:50.579 --rc geninfo_all_blocks=1 00:20:50.579 --rc geninfo_unexecuted_blocks=1 00:20:50.579 00:20:50.579 ' 00:20:50.579 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:50.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.579 --rc genhtml_branch_coverage=1 00:20:50.579 --rc genhtml_function_coverage=1 00:20:50.579 --rc genhtml_legend=1 00:20:50.579 --rc geninfo_all_blocks=1 00:20:50.579 --rc geninfo_unexecuted_blocks=1 00:20:50.579 00:20:50.579 ' 00:20:50.579 09:53:29 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:50.579 09:53:29 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:50.579 09:53:29 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:50.579 09:53:29 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:50.579 09:53:29 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:50.579 09:53:29 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:50.579 09:53:29 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.vxHbJM2DX6 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:50.580 
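For orientation before the target comes up: this ftl_restore run was invoked as restore.sh -c 0000:00:10.0 0000:00:11.0, so 0000:00:10.0 becomes the NV-cache device, 0000:00:11.0 the base device, and the RPC timeout is 240 s, exactly as the getopts trace above shows. The setup that follows builds the FTL restore stack over those two devices; the sketch below simply collects the rpc.py calls as they appear later in this trace (paths shortened, and the UUIDs are the ones generated in this particular run), assuming a spdk_tgt already listening on /var/tmp/spdk.sock:

    # Sketch of the bdev stack assembled below; calls mirror the xtrace that follows.
    # The lvstore/lvol UUIDs are the values produced in this run, not fixed inputs.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fb2aac14-ead1-4b96-811f-424ffc26b932   # thin data volume, 103424 MiB
    scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1                                             # 5171 MiB write-buffer partition
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0dc61620-61b2-4ac0-971e-77fe1ea2f535 --l2p_dram_limit 10 -c nvc0n1p0

The sizes are the ones the size probes below derive from bdev_get_bdevs output, block_size times num_blocks: 4096 B x 26476544 blocks = 103424 MiB for the data volume, and 4096 B x 1310720 blocks = 5120 MiB for the whole base namespace.
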
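One quirk to expect a little further down, right before bdev_ftl_create is issued: the message "/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected" comes from the xtrace'd check '[' '' -eq 1 ']'. The flag being tested is empty for this invocation, so the [ builtin is handed an empty string where it expects an integer; the test returns a non-zero status, the branch is skipped, and the script carries on, which is why the FTL startup trace follows normally. A hedged sketch of the kind of guard that avoids the noise is below; the flag name is illustrative, not the variable restore.sh actually uses.

    # Sketch: give an unset flag a numeric default before comparing it,
    # so an empty value no longer trips "[: : integer expression expected".
    use_fast_path=${use_fast_path:-0}
    if [ "$use_fast_path" -eq 1 ]; then
        echo "fast path requested"
    fi
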
09:53:29 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77351 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77351 00:20:50.580 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77351 ']' 00:20:50.580 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.580 09:53:29 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:50.580 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.580 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.580 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.580 09:53:29 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:50.580 [2024-11-28 09:53:29.383715] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:50.580 [2024-11-28 09:53:29.384190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77351 ] 00:20:50.841 [2024-11-28 09:53:29.549399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.841 [2024-11-28 09:53:29.694646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.784 09:53:30 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.784 09:53:30 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:51.784 09:53:30 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:51.784 09:53:30 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:51.784 09:53:30 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:51.784 09:53:30 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:51.784 09:53:30 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:51.784 09:53:30 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:52.046 09:53:30 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:52.046 09:53:30 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:52.046 09:53:30 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:52.046 09:53:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:52.046 09:53:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:52.046 09:53:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:52.046 09:53:30 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:52.046 09:53:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:52.308 09:53:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:52.308 { 00:20:52.308 "name": "nvme0n1", 00:20:52.308 "aliases": [ 00:20:52.308 "df164cba-8008-47cf-8d67-fe0530e9d241" 00:20:52.308 ], 00:20:52.308 "product_name": "NVMe disk", 00:20:52.308 "block_size": 4096, 00:20:52.308 "num_blocks": 1310720, 00:20:52.308 "uuid": 
"df164cba-8008-47cf-8d67-fe0530e9d241", 00:20:52.308 "numa_id": -1, 00:20:52.308 "assigned_rate_limits": { 00:20:52.308 "rw_ios_per_sec": 0, 00:20:52.308 "rw_mbytes_per_sec": 0, 00:20:52.308 "r_mbytes_per_sec": 0, 00:20:52.308 "w_mbytes_per_sec": 0 00:20:52.308 }, 00:20:52.308 "claimed": true, 00:20:52.308 "claim_type": "read_many_write_one", 00:20:52.308 "zoned": false, 00:20:52.308 "supported_io_types": { 00:20:52.308 "read": true, 00:20:52.308 "write": true, 00:20:52.308 "unmap": true, 00:20:52.308 "flush": true, 00:20:52.308 "reset": true, 00:20:52.308 "nvme_admin": true, 00:20:52.308 "nvme_io": true, 00:20:52.308 "nvme_io_md": false, 00:20:52.308 "write_zeroes": true, 00:20:52.308 "zcopy": false, 00:20:52.308 "get_zone_info": false, 00:20:52.308 "zone_management": false, 00:20:52.308 "zone_append": false, 00:20:52.308 "compare": true, 00:20:52.308 "compare_and_write": false, 00:20:52.308 "abort": true, 00:20:52.308 "seek_hole": false, 00:20:52.308 "seek_data": false, 00:20:52.308 "copy": true, 00:20:52.308 "nvme_iov_md": false 00:20:52.308 }, 00:20:52.308 "driver_specific": { 00:20:52.308 "nvme": [ 00:20:52.308 { 00:20:52.308 "pci_address": "0000:00:11.0", 00:20:52.308 "trid": { 00:20:52.308 "trtype": "PCIe", 00:20:52.308 "traddr": "0000:00:11.0" 00:20:52.308 }, 00:20:52.308 "ctrlr_data": { 00:20:52.308 "cntlid": 0, 00:20:52.308 "vendor_id": "0x1b36", 00:20:52.308 "model_number": "QEMU NVMe Ctrl", 00:20:52.308 "serial_number": "12341", 00:20:52.308 "firmware_revision": "8.0.0", 00:20:52.308 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:52.308 "oacs": { 00:20:52.308 "security": 0, 00:20:52.308 "format": 1, 00:20:52.308 "firmware": 0, 00:20:52.308 "ns_manage": 1 00:20:52.308 }, 00:20:52.308 "multi_ctrlr": false, 00:20:52.308 "ana_reporting": false 00:20:52.308 }, 00:20:52.308 "vs": { 00:20:52.308 "nvme_version": "1.4" 00:20:52.308 }, 00:20:52.308 "ns_data": { 00:20:52.308 "id": 1, 00:20:52.308 "can_share": false 00:20:52.308 } 00:20:52.308 } 00:20:52.308 ], 00:20:52.308 "mp_policy": "active_passive" 00:20:52.308 } 00:20:52.308 } 00:20:52.308 ]' 00:20:52.308 09:53:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:52.308 09:53:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:52.308 09:53:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:52.308 09:53:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:52.308 09:53:31 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:52.308 09:53:31 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:52.308 09:53:31 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:52.308 09:53:31 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:52.308 09:53:31 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:52.308 09:53:31 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:52.308 09:53:31 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:52.570 09:53:31 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=f768c68a-ba69-4561-bc6d-4e39a54e3184 00:20:52.570 09:53:31 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:52.570 09:53:31 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f768c68a-ba69-4561-bc6d-4e39a54e3184 00:20:52.832 09:53:31 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:53.093 09:53:31 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=fb2aac14-ead1-4b96-811f-424ffc26b932 00:20:53.093 09:53:31 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fb2aac14-ead1-4b96-811f-424ffc26b932 00:20:53.353 09:53:32 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=0dc61620-61b2-4ac0-971e-77fe1ea2f535 00:20:53.353 09:53:32 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:53.354 09:53:32 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0dc61620-61b2-4ac0-971e-77fe1ea2f535 00:20:53.354 09:53:32 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:53.354 09:53:32 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:53.354 09:53:32 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=0dc61620-61b2-4ac0-971e-77fe1ea2f535 00:20:53.354 09:53:32 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:53.354 09:53:32 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 0dc61620-61b2-4ac0-971e-77fe1ea2f535 00:20:53.354 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0dc61620-61b2-4ac0-971e-77fe1ea2f535 00:20:53.354 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:53.354 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:53.354 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:53.354 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0dc61620-61b2-4ac0-971e-77fe1ea2f535 00:20:53.354 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:53.354 { 00:20:53.354 "name": "0dc61620-61b2-4ac0-971e-77fe1ea2f535", 00:20:53.354 "aliases": [ 00:20:53.354 "lvs/nvme0n1p0" 00:20:53.354 ], 00:20:53.354 "product_name": "Logical Volume", 00:20:53.354 "block_size": 4096, 00:20:53.354 "num_blocks": 26476544, 00:20:53.354 "uuid": "0dc61620-61b2-4ac0-971e-77fe1ea2f535", 00:20:53.354 "assigned_rate_limits": { 00:20:53.354 "rw_ios_per_sec": 0, 00:20:53.354 "rw_mbytes_per_sec": 0, 00:20:53.354 "r_mbytes_per_sec": 0, 00:20:53.354 "w_mbytes_per_sec": 0 00:20:53.354 }, 00:20:53.354 "claimed": false, 00:20:53.354 "zoned": false, 00:20:53.354 "supported_io_types": { 00:20:53.354 "read": true, 00:20:53.354 "write": true, 00:20:53.354 "unmap": true, 00:20:53.354 "flush": false, 00:20:53.354 "reset": true, 00:20:53.354 "nvme_admin": false, 00:20:53.354 "nvme_io": false, 00:20:53.354 "nvme_io_md": false, 00:20:53.354 "write_zeroes": true, 00:20:53.354 "zcopy": false, 00:20:53.354 "get_zone_info": false, 00:20:53.354 "zone_management": false, 00:20:53.354 "zone_append": false, 00:20:53.354 "compare": false, 00:20:53.354 "compare_and_write": false, 00:20:53.354 "abort": false, 00:20:53.354 "seek_hole": true, 00:20:53.354 "seek_data": true, 00:20:53.354 "copy": false, 00:20:53.354 "nvme_iov_md": false 00:20:53.354 }, 00:20:53.354 "driver_specific": { 00:20:53.354 "lvol": { 00:20:53.354 "lvol_store_uuid": "fb2aac14-ead1-4b96-811f-424ffc26b932", 00:20:53.354 "base_bdev": "nvme0n1", 00:20:53.354 "thin_provision": true, 00:20:53.354 "num_allocated_clusters": 0, 00:20:53.354 "snapshot": false, 00:20:53.354 "clone": false, 00:20:53.354 "esnap_clone": false 00:20:53.354 } 00:20:53.354 } 00:20:53.354 } 00:20:53.354 ]' 00:20:53.354 09:53:32 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:53.615 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:53.615 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:53.615 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:53.615 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:53.615 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:53.615 09:53:32 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:53.615 09:53:32 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:53.615 09:53:32 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:53.876 09:53:32 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:53.876 09:53:32 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:53.876 09:53:32 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 0dc61620-61b2-4ac0-971e-77fe1ea2f535 00:20:53.876 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0dc61620-61b2-4ac0-971e-77fe1ea2f535 00:20:53.876 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:53.876 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:53.876 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:53.876 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0dc61620-61b2-4ac0-971e-77fe1ea2f535 00:20:54.137 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:54.137 { 00:20:54.137 "name": "0dc61620-61b2-4ac0-971e-77fe1ea2f535", 00:20:54.137 "aliases": [ 00:20:54.137 "lvs/nvme0n1p0" 00:20:54.137 ], 00:20:54.137 "product_name": "Logical Volume", 00:20:54.137 "block_size": 4096, 00:20:54.137 "num_blocks": 26476544, 00:20:54.137 "uuid": "0dc61620-61b2-4ac0-971e-77fe1ea2f535", 00:20:54.137 "assigned_rate_limits": { 00:20:54.137 "rw_ios_per_sec": 0, 00:20:54.137 "rw_mbytes_per_sec": 0, 00:20:54.137 "r_mbytes_per_sec": 0, 00:20:54.137 "w_mbytes_per_sec": 0 00:20:54.137 }, 00:20:54.137 "claimed": false, 00:20:54.137 "zoned": false, 00:20:54.137 "supported_io_types": { 00:20:54.137 "read": true, 00:20:54.137 "write": true, 00:20:54.137 "unmap": true, 00:20:54.137 "flush": false, 00:20:54.137 "reset": true, 00:20:54.137 "nvme_admin": false, 00:20:54.137 "nvme_io": false, 00:20:54.137 "nvme_io_md": false, 00:20:54.137 "write_zeroes": true, 00:20:54.137 "zcopy": false, 00:20:54.137 "get_zone_info": false, 00:20:54.137 "zone_management": false, 00:20:54.137 "zone_append": false, 00:20:54.137 "compare": false, 00:20:54.137 "compare_and_write": false, 00:20:54.137 "abort": false, 00:20:54.137 "seek_hole": true, 00:20:54.137 "seek_data": true, 00:20:54.137 "copy": false, 00:20:54.137 "nvme_iov_md": false 00:20:54.137 }, 00:20:54.137 "driver_specific": { 00:20:54.137 "lvol": { 00:20:54.137 "lvol_store_uuid": "fb2aac14-ead1-4b96-811f-424ffc26b932", 00:20:54.137 "base_bdev": "nvme0n1", 00:20:54.137 "thin_provision": true, 00:20:54.137 "num_allocated_clusters": 0, 00:20:54.137 "snapshot": false, 00:20:54.137 "clone": false, 00:20:54.137 "esnap_clone": false 00:20:54.137 } 00:20:54.137 } 00:20:54.137 } 00:20:54.137 ]' 00:20:54.137 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:20:54.137 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:54.137 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:54.137 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:54.137 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:54.137 09:53:32 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:54.137 09:53:32 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:54.137 09:53:32 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:54.398 09:53:33 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:54.398 09:53:33 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 0dc61620-61b2-4ac0-971e-77fe1ea2f535 00:20:54.398 09:53:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0dc61620-61b2-4ac0-971e-77fe1ea2f535 00:20:54.398 09:53:33 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:54.398 09:53:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:54.398 09:53:33 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:54.398 09:53:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0dc61620-61b2-4ac0-971e-77fe1ea2f535 00:20:54.398 09:53:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:54.398 { 00:20:54.398 "name": "0dc61620-61b2-4ac0-971e-77fe1ea2f535", 00:20:54.398 "aliases": [ 00:20:54.398 "lvs/nvme0n1p0" 00:20:54.398 ], 00:20:54.398 "product_name": "Logical Volume", 00:20:54.398 "block_size": 4096, 00:20:54.398 "num_blocks": 26476544, 00:20:54.398 "uuid": "0dc61620-61b2-4ac0-971e-77fe1ea2f535", 00:20:54.398 "assigned_rate_limits": { 00:20:54.398 "rw_ios_per_sec": 0, 00:20:54.398 "rw_mbytes_per_sec": 0, 00:20:54.398 "r_mbytes_per_sec": 0, 00:20:54.398 "w_mbytes_per_sec": 0 00:20:54.398 }, 00:20:54.398 "claimed": false, 00:20:54.398 "zoned": false, 00:20:54.398 "supported_io_types": { 00:20:54.398 "read": true, 00:20:54.398 "write": true, 00:20:54.398 "unmap": true, 00:20:54.398 "flush": false, 00:20:54.398 "reset": true, 00:20:54.398 "nvme_admin": false, 00:20:54.398 "nvme_io": false, 00:20:54.398 "nvme_io_md": false, 00:20:54.398 "write_zeroes": true, 00:20:54.399 "zcopy": false, 00:20:54.399 "get_zone_info": false, 00:20:54.399 "zone_management": false, 00:20:54.399 "zone_append": false, 00:20:54.399 "compare": false, 00:20:54.399 "compare_and_write": false, 00:20:54.399 "abort": false, 00:20:54.399 "seek_hole": true, 00:20:54.399 "seek_data": true, 00:20:54.399 "copy": false, 00:20:54.399 "nvme_iov_md": false 00:20:54.399 }, 00:20:54.399 "driver_specific": { 00:20:54.399 "lvol": { 00:20:54.399 "lvol_store_uuid": "fb2aac14-ead1-4b96-811f-424ffc26b932", 00:20:54.399 "base_bdev": "nvme0n1", 00:20:54.399 "thin_provision": true, 00:20:54.399 "num_allocated_clusters": 0, 00:20:54.399 "snapshot": false, 00:20:54.399 "clone": false, 00:20:54.399 "esnap_clone": false 00:20:54.399 } 00:20:54.399 } 00:20:54.399 } 00:20:54.399 ]' 00:20:54.399 09:53:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:54.399 09:53:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:54.399 09:53:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:54.660 09:53:33 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:54.660 09:53:33 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:54.660 09:53:33 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:54.660 09:53:33 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:54.661 09:53:33 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0dc61620-61b2-4ac0-971e-77fe1ea2f535 --l2p_dram_limit 10' 00:20:54.661 09:53:33 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:54.661 09:53:33 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:54.661 09:53:33 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:54.661 09:53:33 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:54.661 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:54.661 09:53:33 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0dc61620-61b2-4ac0-971e-77fe1ea2f535 --l2p_dram_limit 10 -c nvc0n1p0 00:20:54.661 [2024-11-28 09:53:33.464248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-28 09:53:33.464415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:54.661 [2024-11-28 09:53:33.464437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:54.661 [2024-11-28 09:53:33.464444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-28 09:53:33.464494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-28 09:53:33.464503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.661 [2024-11-28 09:53:33.464512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:54.661 [2024-11-28 09:53:33.464518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-28 09:53:33.464536] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:54.661 [2024-11-28 09:53:33.465052] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:54.661 [2024-11-28 09:53:33.465069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-28 09:53:33.465075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.661 [2024-11-28 09:53:33.465085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:20:54.661 [2024-11-28 09:53:33.465091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-28 09:53:33.465142] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c9e2861e-73d9-47cb-81fe-9ef994e3fa71 00:20:54.661 [2024-11-28 09:53:33.466454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-28 09:53:33.466485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:54.661 [2024-11-28 09:53:33.466494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:54.661 [2024-11-28 09:53:33.466504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-28 09:53:33.473354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-28 
09:53:33.473422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.661 [2024-11-28 09:53:33.473430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.816 ms 00:20:54.661 [2024-11-28 09:53:33.473437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-28 09:53:33.473506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-28 09:53:33.473515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.661 [2024-11-28 09:53:33.473521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:54.661 [2024-11-28 09:53:33.473533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-28 09:53:33.473571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-28 09:53:33.473582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:54.661 [2024-11-28 09:53:33.473591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:54.661 [2024-11-28 09:53:33.473599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-28 09:53:33.473615] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:54.661 [2024-11-28 09:53:33.476856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-28 09:53:33.476880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.661 [2024-11-28 09:53:33.476891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:20:54.661 [2024-11-28 09:53:33.476897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-28 09:53:33.476927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-28 09:53:33.476933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:54.661 [2024-11-28 09:53:33.476942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:54.661 [2024-11-28 09:53:33.476947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-28 09:53:33.476962] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:54.661 [2024-11-28 09:53:33.477071] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:54.661 [2024-11-28 09:53:33.477084] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:54.661 [2024-11-28 09:53:33.477093] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:54.661 [2024-11-28 09:53:33.477103] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:54.661 [2024-11-28 09:53:33.477111] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:54.661 [2024-11-28 09:53:33.477119] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:54.661 [2024-11-28 09:53:33.477126] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:54.661 [2024-11-28 09:53:33.477133] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:54.661 [2024-11-28 09:53:33.477139] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:54.661 [2024-11-28 09:53:33.477147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-28 09:53:33.477172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:54.661 [2024-11-28 09:53:33.477180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:20:54.661 [2024-11-28 09:53:33.477186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-28 09:53:33.477255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-28 09:53:33.477262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:54.661 [2024-11-28 09:53:33.477270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:54.661 [2024-11-28 09:53:33.477275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-28 09:53:33.477368] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:54.661 [2024-11-28 09:53:33.477376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:54.661 [2024-11-28 09:53:33.477384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.661 [2024-11-28 09:53:33.477390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.661 [2024-11-28 09:53:33.477397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:54.661 [2024-11-28 09:53:33.477403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:54.661 [2024-11-28 09:53:33.477410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:54.661 [2024-11-28 09:53:33.477416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:54.661 [2024-11-28 09:53:33.477423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:54.661 [2024-11-28 09:53:33.477428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.661 [2024-11-28 09:53:33.477435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:54.661 [2024-11-28 09:53:33.477443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:54.661 [2024-11-28 09:53:33.477450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.661 [2024-11-28 09:53:33.477462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:54.661 [2024-11-28 09:53:33.477469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:54.661 [2024-11-28 09:53:33.477474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.661 [2024-11-28 09:53:33.477483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:54.661 [2024-11-28 09:53:33.477490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:54.661 [2024-11-28 09:53:33.477498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.661 [2024-11-28 09:53:33.477504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:54.661 [2024-11-28 09:53:33.477510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:54.661 [2024-11-28 09:53:33.477517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.661 [2024-11-28 09:53:33.477524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:54.661 
[2024-11-28 09:53:33.477530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:54.661 [2024-11-28 09:53:33.477537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.661 [2024-11-28 09:53:33.477543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:54.661 [2024-11-28 09:53:33.477550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:54.661 [2024-11-28 09:53:33.477554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.661 [2024-11-28 09:53:33.477561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:54.661 [2024-11-28 09:53:33.477566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:54.661 [2024-11-28 09:53:33.477573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.661 [2024-11-28 09:53:33.477578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:54.661 [2024-11-28 09:53:33.477587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:54.661 [2024-11-28 09:53:33.477592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.661 [2024-11-28 09:53:33.477599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:54.661 [2024-11-28 09:53:33.477604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:54.661 [2024-11-28 09:53:33.477611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.661 [2024-11-28 09:53:33.477615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:54.661 [2024-11-28 09:53:33.477622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:54.661 [2024-11-28 09:53:33.477627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.661 [2024-11-28 09:53:33.477635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:54.662 [2024-11-28 09:53:33.477640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:54.662 [2024-11-28 09:53:33.477647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.662 [2024-11-28 09:53:33.477652] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:54.662 [2024-11-28 09:53:33.477659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:54.662 [2024-11-28 09:53:33.477665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.662 [2024-11-28 09:53:33.477673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.662 [2024-11-28 09:53:33.477679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:54.662 [2024-11-28 09:53:33.477688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:54.662 [2024-11-28 09:53:33.477694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:54.662 [2024-11-28 09:53:33.477701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:54.662 [2024-11-28 09:53:33.477705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:54.662 [2024-11-28 09:53:33.477712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:54.662 [2024-11-28 09:53:33.477721] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:54.662 [2024-11-28 
09:53:33.477733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.662 [2024-11-28 09:53:33.477739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:54.662 [2024-11-28 09:53:33.477747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:54.662 [2024-11-28 09:53:33.477752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:54.662 [2024-11-28 09:53:33.477760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:54.662 [2024-11-28 09:53:33.477766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:54.662 [2024-11-28 09:53:33.477772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:54.662 [2024-11-28 09:53:33.477778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:54.662 [2024-11-28 09:53:33.477786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:54.662 [2024-11-28 09:53:33.477792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:54.662 [2024-11-28 09:53:33.477801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:54.662 [2024-11-28 09:53:33.477806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:54.662 [2024-11-28 09:53:33.477813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:54.662 [2024-11-28 09:53:33.477818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:54.662 [2024-11-28 09:53:33.477826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:54.662 [2024-11-28 09:53:33.477832] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:54.662 [2024-11-28 09:53:33.477840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.662 [2024-11-28 09:53:33.477845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:54.662 [2024-11-28 09:53:33.477853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:54.662 [2024-11-28 09:53:33.477859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:54.662 [2024-11-28 09:53:33.477866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:54.662 [2024-11-28 09:53:33.477872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.662 [2024-11-28 09:53:33.477879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:54.662 [2024-11-28 09:53:33.477884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:20:54.662 [2024-11-28 09:53:33.477892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.662 [2024-11-28 09:53:33.477934] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:54.662 [2024-11-28 09:53:33.477953] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:58.873 [2024-11-28 09:53:36.918661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-11-28 09:53:36.918700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:58.873 [2024-11-28 09:53:36.918710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3440.716 ms 00:20:58.873 [2024-11-28 09:53:36.918719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-11-28 09:53:36.942228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-11-28 09:53:36.942373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:58.873 [2024-11-28 09:53:36.942387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.342 ms 00:20:58.873 [2024-11-28 09:53:36.942395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-11-28 09:53:36.942482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-11-28 09:53:36.942491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:58.873 [2024-11-28 09:53:36.942498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:58.873 [2024-11-28 09:53:36.942509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-11-28 09:53:36.969251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-11-28 09:53:36.969375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:58.873 [2024-11-28 09:53:36.969389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.713 ms 00:20:58.873 [2024-11-28 09:53:36.969397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-11-28 09:53:36.969423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-11-28 09:53:36.969432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:58.873 [2024-11-28 09:53:36.969439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:58.873 [2024-11-28 09:53:36.969452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-11-28 09:53:36.969865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-11-28 09:53:36.969881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:58.873 [2024-11-28 09:53:36.969889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:20:58.873 [2024-11-28 09:53:36.969897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 
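The "[: : integer expression expected" complaint a little earlier comes from restore.sh line 54 evaluating '[ "" -eq 1 ]' on an empty value; it is harmless in this run and the script proceeds straight to bdev_ftl_create. Each FTL management step in the trace itself is reported by mngt/ftl_mngt.c as a four-record group: Action, name, duration, and status. To see where the startup time goes (for example the 3.4 s NV cache scrub above), those groups can be collapsed to one line per step; a minimal shell sketch, assuming the console output has been saved as ftl_restore.log (an assumed filename, not produced by the job itself) with one record per line:
# Hypothetical post-processing of a saved copy of this console log.
# Pairs each "name:" trace_step record with the "duration:" record that follows it,
# printing e.g. "Scrub NV cache -> 3440.716 ms".
awk '/trace_step:.*name:/     { sub(/.*name: /, "");     step = $0 }
     /trace_step:.*duration:/ { sub(/.*duration: /, ""); print step " -> " $0 }' ftl_restore.log
This relies only on the record layout shown above (428:trace_step carries the step name, 430:trace_step the duration).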
[2024-11-28 09:53:36.969981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-11-28 09:53:36.969992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:58.873 [2024-11-28 09:53:36.969998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:58.873 [2024-11-28 09:53:36.970008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-11-28 09:53:36.983074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-11-28 09:53:36.983103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:58.873 [2024-11-28 09:53:36.983112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.051 ms 00:20:58.873 [2024-11-28 09:53:36.983120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-11-28 09:53:37.016875] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:58.873 [2024-11-28 09:53:37.020292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-11-28 09:53:37.020319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:58.873 [2024-11-28 09:53:37.020332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.086 ms 00:20:58.873 [2024-11-28 09:53:37.020338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-11-28 09:53:37.097228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-11-28 09:53:37.097272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:58.873 [2024-11-28 09:53:37.097284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.859 ms 00:20:58.873 [2024-11-28 09:53:37.097290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-11-28 09:53:37.097443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-11-28 09:53:37.097452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:58.873 [2024-11-28 09:53:37.097463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:20:58.873 [2024-11-28 09:53:37.097470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-11-28 09:53:37.115979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-11-28 09:53:37.116087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:58.873 [2024-11-28 09:53:37.116105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.473 ms 00:20:58.873 [2024-11-28 09:53:37.116112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.873 [2024-11-28 09:53:37.133852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.873 [2024-11-28 09:53:37.133877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:58.873 [2024-11-28 09:53:37.133888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.708 ms 00:20:58.873 [2024-11-28 09:53:37.133894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.134386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.134397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:58.874 
[2024-11-28 09:53:37.134407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:20:58.874 [2024-11-28 09:53:37.134414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.198880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.198916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:58.874 [2024-11-28 09:53:37.198929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.429 ms 00:20:58.874 [2024-11-28 09:53:37.198936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.218769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.218795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:58.874 [2024-11-28 09:53:37.218806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.777 ms 00:20:58.874 [2024-11-28 09:53:37.218812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.237192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.237302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:58.874 [2024-11-28 09:53:37.237318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.350 ms 00:20:58.874 [2024-11-28 09:53:37.237324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.256220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.256323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:58.874 [2024-11-28 09:53:37.256339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.868 ms 00:20:58.874 [2024-11-28 09:53:37.256345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.256376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.256383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:58.874 [2024-11-28 09:53:37.256394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:58.874 [2024-11-28 09:53:37.256400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.256468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.256477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:58.874 [2024-11-28 09:53:37.256485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:58.874 [2024-11-28 09:53:37.256491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.257318] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3792.681 ms, result 0 00:20:58.874 { 00:20:58.874 "name": "ftl0", 00:20:58.874 "uuid": "c9e2861e-73d9-47cb-81fe-9ef994e3fa71" 00:20:58.874 } 00:20:58.874 09:53:37 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:58.874 09:53:37 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:58.874 09:53:37 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:58.874 09:53:37 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:58.874 [2024-11-28 09:53:37.660804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.660840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:58.874 [2024-11-28 09:53:37.660850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:58.874 [2024-11-28 09:53:37.660858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.660875] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:58.874 [2024-11-28 09:53:37.663137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.663168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:58.874 [2024-11-28 09:53:37.663178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.249 ms 00:20:58.874 [2024-11-28 09:53:37.663184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.663394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.663403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:58.874 [2024-11-28 09:53:37.663412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:20:58.874 [2024-11-28 09:53:37.663418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.665855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.665872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:58.874 [2024-11-28 09:53:37.665882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.423 ms 00:20:58.874 [2024-11-28 09:53:37.665889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.670543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.670565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:58.874 [2024-11-28 09:53:37.670575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.637 ms 00:20:58.874 [2024-11-28 09:53:37.670582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.688705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.688731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:58.874 [2024-11-28 09:53:37.688741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.066 ms 00:20:58.874 [2024-11-28 09:53:37.688748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.701955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.702068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:58.874 [2024-11-28 09:53:37.702084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.173 ms 00:20:58.874 [2024-11-28 09:53:37.702091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.702215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.702224] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:58.874 [2024-11-28 09:53:37.702233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:58.874 [2024-11-28 09:53:37.702239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.720617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.720717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:58.874 [2024-11-28 09:53:37.720732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.360 ms 00:20:58.874 [2024-11-28 09:53:37.720738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.874 [2024-11-28 09:53:37.738727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.874 [2024-11-28 09:53:37.738752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:58.874 [2024-11-28 09:53:37.738762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.961 ms 00:20:58.874 [2024-11-28 09:53:37.738768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.137 [2024-11-28 09:53:37.756096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.137 [2024-11-28 09:53:37.756120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:59.137 [2024-11-28 09:53:37.756130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.298 ms 00:20:59.137 [2024-11-28 09:53:37.756136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.137 [2024-11-28 09:53:37.773800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.137 [2024-11-28 09:53:37.773825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:59.137 [2024-11-28 09:53:37.773834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.598 ms 00:20:59.137 [2024-11-28 09:53:37.773840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.137 [2024-11-28 09:53:37.773868] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:59.137 [2024-11-28 09:53:37.773879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773946] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.773999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:59.137 [2024-11-28 09:53:37.774237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 
[2024-11-28 09:53:37.774244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:59.138 [2024-11-28 09:53:37.774419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:59.138 [2024-11-28 09:53:37.774694] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:59.138 [2024-11-28 09:53:37.774701] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9e2861e-73d9-47cb-81fe-9ef994e3fa71 00:20:59.138 [2024-11-28 09:53:37.774707] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:59.138 [2024-11-28 09:53:37.774716] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:59.138 [2024-11-28 09:53:37.774724] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:59.138 [2024-11-28 09:53:37.774731] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:59.138 [2024-11-28 09:53:37.774736] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:59.138 [2024-11-28 09:53:37.774743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:59.138 [2024-11-28 09:53:37.774748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:59.138 [2024-11-28 09:53:37.774755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:59.138 [2024-11-28 09:53:37.774760] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:59.138 [2024-11-28 09:53:37.774767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.138 [2024-11-28 09:53:37.774772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:59.138 [2024-11-28 09:53:37.774781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:20:59.138 [2024-11-28 09:53:37.774789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.138 [2024-11-28 09:53:37.784532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.138 [2024-11-28 09:53:37.784555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:59.138 [2024-11-28 09:53:37.784565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.708 ms 00:20:59.138 [2024-11-28 09:53:37.784571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.138 [2024-11-28 09:53:37.784856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.138 [2024-11-28 09:53:37.784864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:59.138 [2024-11-28 09:53:37.784874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:59.138 [2024-11-28 09:53:37.784880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.138 [2024-11-28 09:53:37.819553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.138 [2024-11-28 09:53:37.819579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:59.138 [2024-11-28 09:53:37.819589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.138 [2024-11-28 09:53:37.819596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.138 [2024-11-28 09:53:37.819643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.138 [2024-11-28 09:53:37.819650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:59.138 [2024-11-28 09:53:37.819659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.138 [2024-11-28 09:53:37.819665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.139 [2024-11-28 09:53:37.819723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.139 [2024-11-28 09:53:37.819731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:59.139 [2024-11-28 09:53:37.819740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.139 [2024-11-28 09:53:37.819746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.139 [2024-11-28 09:53:37.819764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.139 [2024-11-28 09:53:37.819771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:59.139 [2024-11-28 09:53:37.819779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.139 [2024-11-28 09:53:37.819786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.139 [2024-11-28 09:53:37.883509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.139 [2024-11-28 09:53:37.883543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:59.139 [2024-11-28 09:53:37.883554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.139 
[2024-11-28 09:53:37.883561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.139 [2024-11-28 09:53:37.935011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.139 [2024-11-28 09:53:37.935044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:59.139 [2024-11-28 09:53:37.935057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.139 [2024-11-28 09:53:37.935064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.139 [2024-11-28 09:53:37.935168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.139 [2024-11-28 09:53:37.935177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:59.139 [2024-11-28 09:53:37.935186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.139 [2024-11-28 09:53:37.935192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.139 [2024-11-28 09:53:37.935233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.139 [2024-11-28 09:53:37.935241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:59.139 [2024-11-28 09:53:37.935249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.139 [2024-11-28 09:53:37.935255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.139 [2024-11-28 09:53:37.935338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.139 [2024-11-28 09:53:37.935347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:59.139 [2024-11-28 09:53:37.935355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.139 [2024-11-28 09:53:37.935362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.139 [2024-11-28 09:53:37.935393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.139 [2024-11-28 09:53:37.935400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:59.139 [2024-11-28 09:53:37.935409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.139 [2024-11-28 09:53:37.935415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.139 [2024-11-28 09:53:37.935453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.139 [2024-11-28 09:53:37.935460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:59.139 [2024-11-28 09:53:37.935469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.139 [2024-11-28 09:53:37.935475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.139 [2024-11-28 09:53:37.935518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.139 [2024-11-28 09:53:37.935526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:59.139 [2024-11-28 09:53:37.935535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.139 [2024-11-28 09:53:37.935542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.139 [2024-11-28 09:53:37.935661] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 274.819 ms, result 0 00:20:59.139 true 00:20:59.139 09:53:37 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77351 00:20:59.139 
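From restore.sh@66 onward the trace records the handoff into the restore phase: killprocess (from autotest_common.sh) stops the SPDK app with pid 77351, restore.sh@69 fills a 1 GiB test file from /dev/urandom, @70 records its md5sum, and @73 writes the file into ftl0 through spdk_dd pointed at test/ftl/config/ftl.json, after which FTL is started again and its superblock reloaded. Condensed into plain shell, with the /home/vagrant/spdk_repo/spdk prefix shortened to $SPDK_DIR, the traced sequence amounts to roughly the following paraphrase (not the restore.sh source itself):
# Paraphrase of the traced commands; $SPDK_DIR and $svcpid are placeholders for the
# full repository path and the app pid (77351) shown in the log.
kill "$svcpid"                                                          # killprocess: signal the first SPDK app
wait "$svcpid"                                                          # ...and wait for it to exit
dd if=/dev/urandom of="$SPDK_DIR/test/ftl/testfile" bs=4K count=256K    # 1 GiB of random data
md5sum "$SPDK_DIR/test/ftl/testfile"                                    # record a checksum of the input data
"$SPDK_DIR/build/bin/spdk_dd" --if="$SPDK_DIR/test/ftl/testfile" --ob=ftl0 \
    --json="$SPDK_DIR/test/ftl/config/ftl.json"                         # replay the file into the FTL bdev
The second FTL startup that follows ("Load super block", "SHM: clean 0, shm_clean 0") is the restore path this test exists to exercise.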
09:53:37 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77351 ']' 00:20:59.139 09:53:37 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77351 00:20:59.139 09:53:37 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:59.139 09:53:37 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.139 09:53:37 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77351 00:20:59.139 killing process with pid 77351 00:20:59.139 09:53:37 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.139 09:53:37 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.139 09:53:37 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77351' 00:20:59.139 09:53:37 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77351 00:20:59.139 09:53:37 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77351 00:21:05.727 09:53:43 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:09.036 262144+0 records in 00:21:09.036 262144+0 records out 00:21:09.036 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.12481 s, 260 MB/s 00:21:09.036 09:53:47 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:10.951 09:53:49 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:10.951 [2024-11-28 09:53:49.715016] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:21:10.951 [2024-11-28 09:53:49.715129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77578 ] 00:21:11.212 [2024-11-28 09:53:49.876241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.212 [2024-11-28 09:53:50.015966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.474 [2024-11-28 09:53:50.325007] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:11.474 [2024-11-28 09:53:50.325078] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:11.738 [2024-11-28 09:53:50.484585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.738 [2024-11-28 09:53:50.484634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:11.738 [2024-11-28 09:53:50.484648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:11.738 [2024-11-28 09:53:50.484657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.738 [2024-11-28 09:53:50.484706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.738 [2024-11-28 09:53:50.484719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:11.738 [2024-11-28 09:53:50.484728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:11.738 [2024-11-28 09:53:50.484735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.738 [2024-11-28 09:53:50.484755] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:21:11.738 [2024-11-28 09:53:50.485479] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:11.738 [2024-11-28 09:53:50.485498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.738 [2024-11-28 09:53:50.485506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:11.738 [2024-11-28 09:53:50.485516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:21:11.738 [2024-11-28 09:53:50.485523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.738 [2024-11-28 09:53:50.487097] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:11.738 [2024-11-28 09:53:50.500863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.738 [2024-11-28 09:53:50.500900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:11.738 [2024-11-28 09:53:50.500912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.771 ms 00:21:11.738 [2024-11-28 09:53:50.500921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.738 [2024-11-28 09:53:50.500981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.739 [2024-11-28 09:53:50.500991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:11.739 [2024-11-28 09:53:50.501000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:11.739 [2024-11-28 09:53:50.501007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.739 [2024-11-28 09:53:50.508178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.739 [2024-11-28 09:53:50.508205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:11.739 [2024-11-28 09:53:50.508216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.109 ms 00:21:11.739 [2024-11-28 09:53:50.508228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.739 [2024-11-28 09:53:50.508304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.739 [2024-11-28 09:53:50.508314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:11.739 [2024-11-28 09:53:50.508322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:11.739 [2024-11-28 09:53:50.508330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.739 [2024-11-28 09:53:50.508365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.739 [2024-11-28 09:53:50.508375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:11.739 [2024-11-28 09:53:50.508383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:11.739 [2024-11-28 09:53:50.508391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.739 [2024-11-28 09:53:50.508416] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:11.739 [2024-11-28 09:53:50.512018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.739 [2024-11-28 09:53:50.512047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:11.739 [2024-11-28 09:53:50.512059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.608 ms 00:21:11.739 [2024-11-28 09:53:50.512066] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.739 [2024-11-28 09:53:50.512095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.739 [2024-11-28 09:53:50.512104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:11.739 [2024-11-28 09:53:50.512112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:11.739 [2024-11-28 09:53:50.512119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.739 [2024-11-28 09:53:50.512164] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:11.739 [2024-11-28 09:53:50.512187] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:11.739 [2024-11-28 09:53:50.512223] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:11.739 [2024-11-28 09:53:50.512241] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:11.739 [2024-11-28 09:53:50.512350] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:11.739 [2024-11-28 09:53:50.512361] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:11.739 [2024-11-28 09:53:50.512372] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:11.739 [2024-11-28 09:53:50.512382] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:11.739 [2024-11-28 09:53:50.512392] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:11.739 [2024-11-28 09:53:50.512400] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:11.739 [2024-11-28 09:53:50.512408] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:11.739 [2024-11-28 09:53:50.512418] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:11.739 [2024-11-28 09:53:50.512426] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:11.739 [2024-11-28 09:53:50.512434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.739 [2024-11-28 09:53:50.512442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:11.739 [2024-11-28 09:53:50.512450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:21:11.739 [2024-11-28 09:53:50.512458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.739 [2024-11-28 09:53:50.512541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.739 [2024-11-28 09:53:50.512550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:11.739 [2024-11-28 09:53:50.512558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:11.739 [2024-11-28 09:53:50.512567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.739 [2024-11-28 09:53:50.512688] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:11.739 [2024-11-28 09:53:50.512701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:11.739 [2024-11-28 09:53:50.512709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:11.739 
[2024-11-28 09:53:50.512717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.739 [2024-11-28 09:53:50.512725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:11.739 [2024-11-28 09:53:50.512732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:11.739 [2024-11-28 09:53:50.512739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:11.739 [2024-11-28 09:53:50.512746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:11.739 [2024-11-28 09:53:50.512753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:11.739 [2024-11-28 09:53:50.512760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.739 [2024-11-28 09:53:50.512767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:11.739 [2024-11-28 09:53:50.512773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:11.739 [2024-11-28 09:53:50.512781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.739 [2024-11-28 09:53:50.512793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:11.739 [2024-11-28 09:53:50.512799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:11.739 [2024-11-28 09:53:50.512806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.739 [2024-11-28 09:53:50.512812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:11.739 [2024-11-28 09:53:50.512820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:11.739 [2024-11-28 09:53:50.512827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.739 [2024-11-28 09:53:50.512834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:11.739 [2024-11-28 09:53:50.512841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:11.739 [2024-11-28 09:53:50.512847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.739 [2024-11-28 09:53:50.512854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:11.739 [2024-11-28 09:53:50.512861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:11.739 [2024-11-28 09:53:50.512868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.739 [2024-11-28 09:53:50.512875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:11.739 [2024-11-28 09:53:50.512882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:11.739 [2024-11-28 09:53:50.512889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.739 [2024-11-28 09:53:50.512895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:11.739 [2024-11-28 09:53:50.512902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:11.739 [2024-11-28 09:53:50.512908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.739 [2024-11-28 09:53:50.512914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:11.739 [2024-11-28 09:53:50.512921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:11.739 [2024-11-28 09:53:50.512928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.739 [2024-11-28 09:53:50.512935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:21:11.739 [2024-11-28 09:53:50.512941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:11.739 [2024-11-28 09:53:50.512948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.739 [2024-11-28 09:53:50.512954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:11.739 [2024-11-28 09:53:50.512960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:11.739 [2024-11-28 09:53:50.512967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.739 [2024-11-28 09:53:50.512975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:11.739 [2024-11-28 09:53:50.512981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:11.739 [2024-11-28 09:53:50.512987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.739 [2024-11-28 09:53:50.512993] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:11.739 [2024-11-28 09:53:50.513001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:11.739 [2024-11-28 09:53:50.513008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:11.739 [2024-11-28 09:53:50.513015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.739 [2024-11-28 09:53:50.513023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:11.739 [2024-11-28 09:53:50.513029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:11.739 [2024-11-28 09:53:50.513038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:11.739 [2024-11-28 09:53:50.513046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:11.739 [2024-11-28 09:53:50.513052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:11.739 [2024-11-28 09:53:50.513058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:11.739 [2024-11-28 09:53:50.513067] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:11.739 [2024-11-28 09:53:50.513077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.739 [2024-11-28 09:53:50.513093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:11.739 [2024-11-28 09:53:50.513101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:11.739 [2024-11-28 09:53:50.513108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:11.740 [2024-11-28 09:53:50.513116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:11.740 [2024-11-28 09:53:50.513124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:11.740 [2024-11-28 09:53:50.513133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:11.740 [2024-11-28 09:53:50.513141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:11.740 [2024-11-28 09:53:50.513148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:11.740 [2024-11-28 09:53:50.513169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:11.740 [2024-11-28 09:53:50.513175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:11.740 [2024-11-28 09:53:50.513182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:11.740 [2024-11-28 09:53:50.513190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:11.740 [2024-11-28 09:53:50.513198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:11.740 [2024-11-28 09:53:50.513205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:11.740 [2024-11-28 09:53:50.513212] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:11.740 [2024-11-28 09:53:50.513221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.740 [2024-11-28 09:53:50.513229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:11.740 [2024-11-28 09:53:50.513236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:11.740 [2024-11-28 09:53:50.513245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:11.740 [2024-11-28 09:53:50.513252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:11.740 [2024-11-28 09:53:50.513260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.740 [2024-11-28 09:53:50.513267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:11.740 [2024-11-28 09:53:50.513275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:21:11.740 [2024-11-28 09:53:50.513282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.740 [2024-11-28 09:53:50.543596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.740 [2024-11-28 09:53:50.543632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:11.740 [2024-11-28 09:53:50.543643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.269 ms 00:21:11.740 [2024-11-28 09:53:50.543654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.740 [2024-11-28 09:53:50.543739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.740 [2024-11-28 09:53:50.543747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:11.740 [2024-11-28 09:53:50.543756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 
00:21:11.740 [2024-11-28 09:53:50.543763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.740 [2024-11-28 09:53:50.586271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.740 [2024-11-28 09:53:50.586312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.740 [2024-11-28 09:53:50.586324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.459 ms 00:21:11.740 [2024-11-28 09:53:50.586333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.740 [2024-11-28 09:53:50.586376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.740 [2024-11-28 09:53:50.586386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:11.740 [2024-11-28 09:53:50.586398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:11.740 [2024-11-28 09:53:50.586406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.740 [2024-11-28 09:53:50.586912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.740 [2024-11-28 09:53:50.586930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.740 [2024-11-28 09:53:50.586940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:21:11.740 [2024-11-28 09:53:50.586949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.740 [2024-11-28 09:53:50.587090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.740 [2024-11-28 09:53:50.587101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.740 [2024-11-28 09:53:50.587115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:21:11.740 [2024-11-28 09:53:50.587123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.740 [2024-11-28 09:53:50.602362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.740 [2024-11-28 09:53:50.602599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.740 [2024-11-28 09:53:50.602616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.221 ms 00:21:11.740 [2024-11-28 09:53:50.602625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.044 [2024-11-28 09:53:50.616244] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:12.044 [2024-11-28 09:53:50.616279] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:12.044 [2024-11-28 09:53:50.616292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.044 [2024-11-28 09:53:50.616301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:12.045 [2024-11-28 09:53:50.616310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.562 ms 00:21:12.045 [2024-11-28 09:53:50.616317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.045 [2024-11-28 09:53:50.640599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.045 [2024-11-28 09:53:50.640637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:12.045 [2024-11-28 09:53:50.640648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.244 ms 00:21:12.045 [2024-11-28 09:53:50.640655] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:12.045 [2024-11-28 09:53:50.652393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.045 [2024-11-28 09:53:50.652524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:12.045 [2024-11-28 09:53:50.652540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.696 ms 00:21:12.045 [2024-11-28 09:53:50.652548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.045 [2024-11-28 09:53:50.664544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.045 [2024-11-28 09:53:50.664576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:12.045 [2024-11-28 09:53:50.664586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.966 ms 00:21:12.045 [2024-11-28 09:53:50.664594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.045 [2024-11-28 09:53:50.665208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.045 [2024-11-28 09:53:50.665229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:12.045 [2024-11-28 09:53:50.665240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:21:12.045 [2024-11-28 09:53:50.665251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.045 [2024-11-28 09:53:50.726169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.045 [2024-11-28 09:53:50.726211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:12.045 [2024-11-28 09:53:50.726226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.901 ms 00:21:12.045 [2024-11-28 09:53:50.726239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.045 [2024-11-28 09:53:50.736976] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:12.045 [2024-11-28 09:53:50.739968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.045 [2024-11-28 09:53:50.740112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:12.045 [2024-11-28 09:53:50.740128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.682 ms 00:21:12.045 [2024-11-28 09:53:50.740136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.045 [2024-11-28 09:53:50.740243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.045 [2024-11-28 09:53:50.740256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:12.045 [2024-11-28 09:53:50.740267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:12.045 [2024-11-28 09:53:50.740275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.045 [2024-11-28 09:53:50.740341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.045 [2024-11-28 09:53:50.740351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:12.045 [2024-11-28 09:53:50.740360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:12.045 [2024-11-28 09:53:50.740368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.045 [2024-11-28 09:53:50.740387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.045 [2024-11-28 09:53:50.740396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:21:12.045 [2024-11-28 09:53:50.740405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:12.045 [2024-11-28 09:53:50.740412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.045 [2024-11-28 09:53:50.740447] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:12.045 [2024-11-28 09:53:50.740459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.045 [2024-11-28 09:53:50.740468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:12.045 [2024-11-28 09:53:50.740476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:12.045 [2024-11-28 09:53:50.740483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.045 [2024-11-28 09:53:50.764840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.045 [2024-11-28 09:53:50.764875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:12.045 [2024-11-28 09:53:50.764887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.339 ms 00:21:12.045 [2024-11-28 09:53:50.764900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.045 [2024-11-28 09:53:50.764974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.045 [2024-11-28 09:53:50.764984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:12.045 [2024-11-28 09:53:50.764993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:12.045 [2024-11-28 09:53:50.765001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.045 [2024-11-28 09:53:50.766108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 281.056 ms, result 0 00:21:13.010  [2024-11-28T09:53:52.831Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-28T09:53:54.218Z] Copying: 40/1024 [MB] (21 MBps) [2024-11-28T09:53:54.792Z] Copying: 63/1024 [MB] (22 MBps) [2024-11-28T09:53:56.177Z] Copying: 78/1024 [MB] (15 MBps) [2024-11-28T09:53:57.122Z] Copying: 101/1024 [MB] (22 MBps) [2024-11-28T09:53:58.067Z] Copying: 117/1024 [MB] (16 MBps) [2024-11-28T09:53:59.018Z] Copying: 136/1024 [MB] (18 MBps) [2024-11-28T09:53:59.965Z] Copying: 158/1024 [MB] (21 MBps) [2024-11-28T09:54:00.908Z] Copying: 178/1024 [MB] (20 MBps) [2024-11-28T09:54:01.849Z] Copying: 191/1024 [MB] (12 MBps) [2024-11-28T09:54:02.790Z] Copying: 202/1024 [MB] (11 MBps) [2024-11-28T09:54:04.179Z] Copying: 213/1024 [MB] (11 MBps) [2024-11-28T09:54:05.124Z] Copying: 225/1024 [MB] (11 MBps) [2024-11-28T09:54:06.067Z] Copying: 236/1024 [MB] (11 MBps) [2024-11-28T09:54:07.011Z] Copying: 247/1024 [MB] (11 MBps) [2024-11-28T09:54:07.956Z] Copying: 258/1024 [MB] (11 MBps) [2024-11-28T09:54:08.900Z] Copying: 270/1024 [MB] (11 MBps) [2024-11-28T09:54:09.846Z] Copying: 281/1024 [MB] (11 MBps) [2024-11-28T09:54:10.789Z] Copying: 292/1024 [MB] (11 MBps) [2024-11-28T09:54:12.176Z] Copying: 304/1024 [MB] (11 MBps) [2024-11-28T09:54:13.120Z] Copying: 315/1024 [MB] (11 MBps) [2024-11-28T09:54:14.063Z] Copying: 326/1024 [MB] (10 MBps) [2024-11-28T09:54:15.007Z] Copying: 336/1024 [MB] (10 MBps) [2024-11-28T09:54:15.957Z] Copying: 347/1024 [MB] (11 MBps) [2024-11-28T09:54:16.900Z] Copying: 358/1024 [MB] (11 MBps) [2024-11-28T09:54:17.843Z] Copying: 370/1024 [MB] (11 MBps) [2024-11-28T09:54:18.783Z] Copying: 380/1024 [MB] (10 MBps) 
[2024-11-28T09:54:20.168Z] Copying: 392/1024 [MB] (11 MBps) [2024-11-28T09:54:21.111Z] Copying: 403/1024 [MB] (11 MBps) [2024-11-28T09:54:22.053Z] Copying: 415/1024 [MB] (11 MBps) [2024-11-28T09:54:22.997Z] Copying: 425/1024 [MB] (10 MBps) [2024-11-28T09:54:23.942Z] Copying: 436/1024 [MB] (11 MBps) [2024-11-28T09:54:24.886Z] Copying: 447/1024 [MB] (11 MBps) [2024-11-28T09:54:25.931Z] Copying: 459/1024 [MB] (11 MBps) [2024-11-28T09:54:26.876Z] Copying: 469/1024 [MB] (10 MBps) [2024-11-28T09:54:27.823Z] Copying: 481/1024 [MB] (11 MBps) [2024-11-28T09:54:29.213Z] Copying: 492/1024 [MB] (11 MBps) [2024-11-28T09:54:29.786Z] Copying: 503/1024 [MB] (11 MBps) [2024-11-28T09:54:31.174Z] Copying: 515/1024 [MB] (11 MBps) [2024-11-28T09:54:32.120Z] Copying: 526/1024 [MB] (11 MBps) [2024-11-28T09:54:33.062Z] Copying: 537/1024 [MB] (10 MBps) [2024-11-28T09:54:34.008Z] Copying: 547/1024 [MB] (10 MBps) [2024-11-28T09:54:34.951Z] Copying: 558/1024 [MB] (11 MBps) [2024-11-28T09:54:35.892Z] Copying: 568/1024 [MB] (10 MBps) [2024-11-28T09:54:36.835Z] Copying: 579/1024 [MB] (10 MBps) [2024-11-28T09:54:38.221Z] Copying: 590/1024 [MB] (11 MBps) [2024-11-28T09:54:38.793Z] Copying: 602/1024 [MB] (11 MBps) [2024-11-28T09:54:40.180Z] Copying: 613/1024 [MB] (10 MBps) [2024-11-28T09:54:41.124Z] Copying: 624/1024 [MB] (10 MBps) [2024-11-28T09:54:42.068Z] Copying: 634/1024 [MB] (10 MBps) [2024-11-28T09:54:43.013Z] Copying: 645/1024 [MB] (11 MBps) [2024-11-28T09:54:43.958Z] Copying: 656/1024 [MB] (11 MBps) [2024-11-28T09:54:44.904Z] Copying: 667/1024 [MB] (10 MBps) [2024-11-28T09:54:45.849Z] Copying: 677/1024 [MB] (10 MBps) [2024-11-28T09:54:46.793Z] Copying: 688/1024 [MB] (10 MBps) [2024-11-28T09:54:48.181Z] Copying: 699/1024 [MB] (11 MBps) [2024-11-28T09:54:49.126Z] Copying: 710/1024 [MB] (10 MBps) [2024-11-28T09:54:50.070Z] Copying: 721/1024 [MB] (10 MBps) [2024-11-28T09:54:51.011Z] Copying: 732/1024 [MB] (11 MBps) [2024-11-28T09:54:51.952Z] Copying: 744/1024 [MB] (11 MBps) [2024-11-28T09:54:52.893Z] Copying: 755/1024 [MB] (11 MBps) [2024-11-28T09:54:53.836Z] Copying: 766/1024 [MB] (10 MBps) [2024-11-28T09:54:54.779Z] Copying: 776/1024 [MB] (10 MBps) [2024-11-28T09:54:56.161Z] Copying: 788/1024 [MB] (11 MBps) [2024-11-28T09:54:57.105Z] Copying: 799/1024 [MB] (11 MBps) [2024-11-28T09:54:58.049Z] Copying: 810/1024 [MB] (11 MBps) [2024-11-28T09:54:58.992Z] Copying: 821/1024 [MB] (11 MBps) [2024-11-28T09:54:59.933Z] Copying: 833/1024 [MB] (11 MBps) [2024-11-28T09:55:00.973Z] Copying: 844/1024 [MB] (11 MBps) [2024-11-28T09:55:01.917Z] Copying: 857/1024 [MB] (13 MBps) [2024-11-28T09:55:02.860Z] Copying: 868/1024 [MB] (11 MBps) [2024-11-28T09:55:03.802Z] Copying: 880/1024 [MB] (11 MBps) [2024-11-28T09:55:05.189Z] Copying: 891/1024 [MB] (11 MBps) [2024-11-28T09:55:06.134Z] Copying: 902/1024 [MB] (11 MBps) [2024-11-28T09:55:07.078Z] Copying: 934544/1048576 [kB] (10152 kBps) [2024-11-28T09:55:08.021Z] Copying: 923/1024 [MB] (11 MBps) [2024-11-28T09:55:08.965Z] Copying: 935/1024 [MB] (11 MBps) [2024-11-28T09:55:09.908Z] Copying: 946/1024 [MB] (11 MBps) [2024-11-28T09:55:10.852Z] Copying: 957/1024 [MB] (11 MBps) [2024-11-28T09:55:11.796Z] Copying: 968/1024 [MB] (11 MBps) [2024-11-28T09:55:13.180Z] Copying: 981/1024 [MB] (12 MBps) [2024-11-28T09:55:14.123Z] Copying: 992/1024 [MB] (11 MBps) [2024-11-28T09:55:15.067Z] Copying: 1004/1024 [MB] (11 MBps) [2024-11-28T09:55:15.328Z] Copying: 1016/1024 [MB] (12 MBps) [2024-11-28T09:55:15.328Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-28 09:55:15.216407] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.448 [2024-11-28 09:55:15.216512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:36.448 [2024-11-28 09:55:15.216611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:36.448 [2024-11-28 09:55:15.216633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.448 [2024-11-28 09:55:15.216688] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:36.448 [2024-11-28 09:55:15.219015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.448 [2024-11-28 09:55:15.219104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:36.448 [2024-11-28 09:55:15.219166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.293 ms 00:22:36.448 [2024-11-28 09:55:15.219185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.448 [2024-11-28 09:55:15.220957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.448 [2024-11-28 09:55:15.221035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:36.448 [2024-11-28 09:55:15.221078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.743 ms 00:22:36.448 [2024-11-28 09:55:15.221096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.448 [2024-11-28 09:55:15.236055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.448 [2024-11-28 09:55:15.236142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:36.448 [2024-11-28 09:55:15.236244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.936 ms 00:22:36.448 [2024-11-28 09:55:15.236265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.448 [2024-11-28 09:55:15.240961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.448 [2024-11-28 09:55:15.241042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:36.448 [2024-11-28 09:55:15.241055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.659 ms 00:22:36.448 [2024-11-28 09:55:15.241063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.448 [2024-11-28 09:55:15.260562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.448 [2024-11-28 09:55:15.260590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:36.448 [2024-11-28 09:55:15.260598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.457 ms 00:22:36.448 [2024-11-28 09:55:15.260605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.448 [2024-11-28 09:55:15.272505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.448 [2024-11-28 09:55:15.272532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:36.448 [2024-11-28 09:55:15.272541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.875 ms 00:22:36.448 [2024-11-28 09:55:15.272547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.448 [2024-11-28 09:55:15.272635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.448 [2024-11-28 09:55:15.272646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:36.448 [2024-11-28 09:55:15.272653] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:36.448 [2024-11-28 09:55:15.272659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.448 [2024-11-28 09:55:15.291343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.448 [2024-11-28 09:55:15.291368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:36.448 [2024-11-28 09:55:15.291376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.673 ms 00:22:36.448 [2024-11-28 09:55:15.291382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.448 [2024-11-28 09:55:15.309223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.448 [2024-11-28 09:55:15.309247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:36.448 [2024-11-28 09:55:15.309254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.818 ms 00:22:36.448 [2024-11-28 09:55:15.309260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.449 [2024-11-28 09:55:15.326837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.449 [2024-11-28 09:55:15.326861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:36.449 [2024-11-28 09:55:15.326868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.553 ms 00:22:36.449 [2024-11-28 09:55:15.326874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.710 [2024-11-28 09:55:15.344114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.710 [2024-11-28 09:55:15.344137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:36.710 [2024-11-28 09:55:15.344145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.198 ms 00:22:36.710 [2024-11-28 09:55:15.344150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.710 [2024-11-28 09:55:15.344184] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:36.710 [2024-11-28 09:55:15.344197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344262] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 
09:55:15.344435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:22:36.710 [2024-11-28 09:55:15.344585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:36.710 [2024-11-28 09:55:15.344649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:36.711 [2024-11-28 09:55:15.344823] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:36.711 [2024-11-28 09:55:15.344832] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9e2861e-73d9-47cb-81fe-9ef994e3fa71 00:22:36.711 [2024-11-28 09:55:15.344838] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:36.711 [2024-11-28 09:55:15.344843] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:36.711 [2024-11-28 09:55:15.344849] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:36.711 [2024-11-28 09:55:15.344855] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:36.711 [2024-11-28 09:55:15.344860] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:36.711 [2024-11-28 09:55:15.344871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:36.711 [2024-11-28 09:55:15.344877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:36.711 [2024-11-28 09:55:15.344883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:36.711 [2024-11-28 09:55:15.344888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:36.711 [2024-11-28 09:55:15.344893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.711 
[2024-11-28 09:55:15.344899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:36.711 [2024-11-28 09:55:15.344906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:22:36.711 [2024-11-28 09:55:15.344912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.711 [2024-11-28 09:55:15.355140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.711 [2024-11-28 09:55:15.355171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:36.711 [2024-11-28 09:55:15.355180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.215 ms 00:22:36.711 [2024-11-28 09:55:15.355186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.711 [2024-11-28 09:55:15.355488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.711 [2024-11-28 09:55:15.355497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:36.711 [2024-11-28 09:55:15.355503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:22:36.711 [2024-11-28 09:55:15.355512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.711 [2024-11-28 09:55:15.383127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.711 [2024-11-28 09:55:15.383245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:36.711 [2024-11-28 09:55:15.383259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.711 [2024-11-28 09:55:15.383265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.711 [2024-11-28 09:55:15.383309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.711 [2024-11-28 09:55:15.383316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:36.711 [2024-11-28 09:55:15.383322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.711 [2024-11-28 09:55:15.383332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.711 [2024-11-28 09:55:15.383378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.711 [2024-11-28 09:55:15.383387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:36.711 [2024-11-28 09:55:15.383393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.711 [2024-11-28 09:55:15.383399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.711 [2024-11-28 09:55:15.383410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.711 [2024-11-28 09:55:15.383418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:36.711 [2024-11-28 09:55:15.383424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.711 [2024-11-28 09:55:15.383430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.711 [2024-11-28 09:55:15.446465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.711 [2024-11-28 09:55:15.446587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:36.711 [2024-11-28 09:55:15.446630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.711 [2024-11-28 09:55:15.446648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.711 [2024-11-28 09:55:15.498126] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.711 [2024-11-28 09:55:15.498257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:36.711 [2024-11-28 09:55:15.498297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.711 [2024-11-28 09:55:15.498327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.711 [2024-11-28 09:55:15.498412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.711 [2024-11-28 09:55:15.498432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:36.711 [2024-11-28 09:55:15.498448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.711 [2024-11-28 09:55:15.498463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.711 [2024-11-28 09:55:15.498501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.711 [2024-11-28 09:55:15.498518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:36.711 [2024-11-28 09:55:15.498534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.711 [2024-11-28 09:55:15.498591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.711 [2024-11-28 09:55:15.498689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.711 [2024-11-28 09:55:15.498732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:36.711 [2024-11-28 09:55:15.498753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.711 [2024-11-28 09:55:15.498819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.711 [2024-11-28 09:55:15.498862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.711 [2024-11-28 09:55:15.498881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:36.711 [2024-11-28 09:55:15.498897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.711 [2024-11-28 09:55:15.498913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.711 [2024-11-28 09:55:15.498959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.711 [2024-11-28 09:55:15.499052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:36.711 [2024-11-28 09:55:15.499067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.711 [2024-11-28 09:55:15.499082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.711 [2024-11-28 09:55:15.499130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.711 [2024-11-28 09:55:15.499149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:36.711 [2024-11-28 09:55:15.499208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.711 [2024-11-28 09:55:15.499227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.711 [2024-11-28 09:55:15.499387] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 282.946 ms, result 0 00:22:37.283 00:22:37.283 00:22:37.284 09:55:16 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:37.545 [2024-11-28 09:55:16.221005] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:22:37.545 [2024-11-28 09:55:16.221273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78474 ] 00:22:37.545 [2024-11-28 09:55:16.376343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.808 [2024-11-28 09:55:16.467695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.069 [2024-11-28 09:55:16.699066] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:38.069 [2024-11-28 09:55:16.699288] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:38.069 [2024-11-28 09:55:16.854569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.069 [2024-11-28 09:55:16.854712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:38.069 [2024-11-28 09:55:16.854764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:38.069 [2024-11-28 09:55:16.854783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.069 [2024-11-28 09:55:16.854839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.069 [2024-11-28 09:55:16.854861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:38.069 [2024-11-28 09:55:16.854876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:38.069 [2024-11-28 09:55:16.854892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.069 [2024-11-28 09:55:16.854917] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:38.069 [2024-11-28 09:55:16.855480] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:38.069 [2024-11-28 09:55:16.855565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.069 [2024-11-28 09:55:16.855605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:38.069 [2024-11-28 09:55:16.855624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:22:38.069 [2024-11-28 09:55:16.855639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.069 [2024-11-28 09:55:16.857125] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:38.069 [2024-11-28 09:55:16.867399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.069 [2024-11-28 09:55:16.867499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:38.069 [2024-11-28 09:55:16.867549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.276 ms 00:22:38.069 [2024-11-28 09:55:16.867567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.069 [2024-11-28 09:55:16.867619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.069 [2024-11-28 09:55:16.867639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:38.069 [2024-11-28 09:55:16.867654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:38.069 
[2024-11-28 09:55:16.867669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.069 [2024-11-28 09:55:16.874000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.069 [2024-11-28 09:55:16.874090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:38.069 [2024-11-28 09:55:16.874170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.278 ms 00:22:38.069 [2024-11-28 09:55:16.874193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.069 [2024-11-28 09:55:16.874259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.069 [2024-11-28 09:55:16.874277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:38.069 [2024-11-28 09:55:16.874299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:38.070 [2024-11-28 09:55:16.874313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.070 [2024-11-28 09:55:16.874406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.070 [2024-11-28 09:55:16.874427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:38.070 [2024-11-28 09:55:16.874443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:38.070 [2024-11-28 09:55:16.874458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.070 [2024-11-28 09:55:16.874488] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:38.070 [2024-11-28 09:55:16.877432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.070 [2024-11-28 09:55:16.877513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:38.070 [2024-11-28 09:55:16.877568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.950 ms 00:22:38.070 [2024-11-28 09:55:16.877586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.070 [2024-11-28 09:55:16.877622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.070 [2024-11-28 09:55:16.877638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:38.070 [2024-11-28 09:55:16.877653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:38.070 [2024-11-28 09:55:16.877668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.070 [2024-11-28 09:55:16.877693] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:38.070 [2024-11-28 09:55:16.877720] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:38.070 [2024-11-28 09:55:16.877793] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:38.070 [2024-11-28 09:55:16.877829] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:38.070 [2024-11-28 09:55:16.877930] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:38.070 [2024-11-28 09:55:16.878010] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:38.070 [2024-11-28 09:55:16.878089] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:22:38.070 [2024-11-28 09:55:16.878116] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:38.070 [2024-11-28 09:55:16.878141] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:38.070 [2024-11-28 09:55:16.878201] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:38.070 [2024-11-28 09:55:16.878218] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:38.070 [2024-11-28 09:55:16.878235] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:38.070 [2024-11-28 09:55:16.878250] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:38.070 [2024-11-28 09:55:16.878288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.070 [2024-11-28 09:55:16.878309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:38.070 [2024-11-28 09:55:16.878324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:22:38.070 [2024-11-28 09:55:16.878338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.070 [2024-11-28 09:55:16.878413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.070 [2024-11-28 09:55:16.878430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:38.070 [2024-11-28 09:55:16.878475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:38.070 [2024-11-28 09:55:16.878492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.070 [2024-11-28 09:55:16.878594] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:38.070 [2024-11-28 09:55:16.878614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:38.070 [2024-11-28 09:55:16.878629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:38.070 [2024-11-28 09:55:16.878644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.070 [2024-11-28 09:55:16.878658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:38.070 [2024-11-28 09:55:16.878706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:38.070 [2024-11-28 09:55:16.878724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:38.070 [2024-11-28 09:55:16.878738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:38.070 [2024-11-28 09:55:16.878752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:38.070 [2024-11-28 09:55:16.878766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:38.070 [2024-11-28 09:55:16.878779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:38.070 [2024-11-28 09:55:16.878793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:38.070 [2024-11-28 09:55:16.878832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:38.070 [2024-11-28 09:55:16.878877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:38.070 [2024-11-28 09:55:16.878894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:38.070 [2024-11-28 09:55:16.878926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.070 [2024-11-28 09:55:16.878942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:22:38.070 [2024-11-28 09:55:16.878956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:38.070 [2024-11-28 09:55:16.878969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.070 [2024-11-28 09:55:16.878983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:38.070 [2024-11-28 09:55:16.878997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:38.070 [2024-11-28 09:55:16.879058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.070 [2024-11-28 09:55:16.879074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:38.070 [2024-11-28 09:55:16.879089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:38.070 [2024-11-28 09:55:16.879102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.070 [2024-11-28 09:55:16.879116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:38.070 [2024-11-28 09:55:16.879130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:38.070 [2024-11-28 09:55:16.879145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.070 [2024-11-28 09:55:16.879252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:38.070 [2024-11-28 09:55:16.879267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:38.070 [2024-11-28 09:55:16.879281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.070 [2024-11-28 09:55:16.879295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:38.070 [2024-11-28 09:55:16.879309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:38.070 [2024-11-28 09:55:16.879349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:38.070 [2024-11-28 09:55:16.879366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:38.070 [2024-11-28 09:55:16.879380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:38.070 [2024-11-28 09:55:16.879394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:38.070 [2024-11-28 09:55:16.879409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:38.070 [2024-11-28 09:55:16.879450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:38.070 [2024-11-28 09:55:16.879467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.070 [2024-11-28 09:55:16.879480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:38.070 [2024-11-28 09:55:16.879494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:38.070 [2024-11-28 09:55:16.879508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.070 [2024-11-28 09:55:16.879549] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:38.070 [2024-11-28 09:55:16.879591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:38.070 [2024-11-28 09:55:16.879608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:38.070 [2024-11-28 09:55:16.879640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.070 [2024-11-28 09:55:16.879658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:38.070 [2024-11-28 09:55:16.879666] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:38.070 [2024-11-28 09:55:16.879672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:38.070 [2024-11-28 09:55:16.879677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:38.070 [2024-11-28 09:55:16.879682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:38.070 [2024-11-28 09:55:16.879688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:38.070 [2024-11-28 09:55:16.879695] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:38.070 [2024-11-28 09:55:16.879703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:38.070 [2024-11-28 09:55:16.879714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:38.070 [2024-11-28 09:55:16.879720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:38.070 [2024-11-28 09:55:16.879725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:38.070 [2024-11-28 09:55:16.879731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:38.070 [2024-11-28 09:55:16.879737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:38.070 [2024-11-28 09:55:16.879742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:38.070 [2024-11-28 09:55:16.879748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:38.070 [2024-11-28 09:55:16.879754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:38.070 [2024-11-28 09:55:16.879760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:38.070 [2024-11-28 09:55:16.879766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:38.070 [2024-11-28 09:55:16.879772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:38.071 [2024-11-28 09:55:16.879777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:38.071 [2024-11-28 09:55:16.879783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:38.071 [2024-11-28 09:55:16.879789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:38.071 [2024-11-28 09:55:16.879795] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:38.071 [2024-11-28 09:55:16.879801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:38.071 [2024-11-28 09:55:16.879807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:38.071 [2024-11-28 09:55:16.879813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:38.071 [2024-11-28 09:55:16.879819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:38.071 [2024-11-28 09:55:16.879825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:38.071 [2024-11-28 09:55:16.879832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.071 [2024-11-28 09:55:16.879838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:38.071 [2024-11-28 09:55:16.879844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:22:38.071 [2024-11-28 09:55:16.879850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.071 [2024-11-28 09:55:16.904013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.071 [2024-11-28 09:55:16.904042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:38.071 [2024-11-28 09:55:16.904051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.124 ms 00:22:38.071 [2024-11-28 09:55:16.904059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.071 [2024-11-28 09:55:16.904128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.071 [2024-11-28 09:55:16.904135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:38.071 [2024-11-28 09:55:16.904142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:38.071 [2024-11-28 09:55:16.904148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.332 [2024-11-28 09:55:16.947472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.332 [2024-11-28 09:55:16.947503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:38.332 [2024-11-28 09:55:16.947513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.268 ms 00:22:38.332 [2024-11-28 09:55:16.947520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.332 [2024-11-28 09:55:16.947553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.332 [2024-11-28 09:55:16.947561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:38.332 [2024-11-28 09:55:16.947571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:38.332 [2024-11-28 09:55:16.947577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.332 [2024-11-28 09:55:16.947996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.332 [2024-11-28 09:55:16.948010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:38.332 [2024-11-28 09:55:16.948017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:22:38.332 [2024-11-28 09:55:16.948023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.332 [2024-11-28 09:55:16.948133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:38.332 [2024-11-28 09:55:16.948141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:38.332 [2024-11-28 09:55:16.948167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:22:38.332 [2024-11-28 09:55:16.948174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.332 [2024-11-28 09:55:16.959969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.332 [2024-11-28 09:55:16.959996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:38.332 [2024-11-28 09:55:16.960005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.779 ms 00:22:38.332 [2024-11-28 09:55:16.960011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.332 [2024-11-28 09:55:16.970762] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:38.332 [2024-11-28 09:55:16.970790] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:38.332 [2024-11-28 09:55:16.970800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.332 [2024-11-28 09:55:16.970806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:38.332 [2024-11-28 09:55:16.970814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.717 ms 00:22:38.332 [2024-11-28 09:55:16.970820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.332 [2024-11-28 09:55:16.989857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.332 [2024-11-28 09:55:16.989985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:38.332 [2024-11-28 09:55:16.989998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.006 ms 00:22:38.332 [2024-11-28 09:55:16.990004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.332 [2024-11-28 09:55:16.999269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.332 [2024-11-28 09:55:16.999297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:38.332 [2024-11-28 09:55:16.999305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.230 ms 00:22:38.332 [2024-11-28 09:55:16.999311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.332 [2024-11-28 09:55:17.008308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.332 [2024-11-28 09:55:17.008405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:38.332 [2024-11-28 09:55:17.008417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.970 ms 00:22:38.332 [2024-11-28 09:55:17.008423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.332 [2024-11-28 09:55:17.008882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.333 [2024-11-28 09:55:17.008894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:38.333 [2024-11-28 09:55:17.008903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:22:38.333 [2024-11-28 09:55:17.008911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.333 [2024-11-28 09:55:17.058113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.333 [2024-11-28 09:55:17.058148] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:38.333 [2024-11-28 09:55:17.058173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.189 ms 00:22:38.333 [2024-11-28 09:55:17.058180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.333 [2024-11-28 09:55:17.066955] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:38.333 [2024-11-28 09:55:17.069363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.333 [2024-11-28 09:55:17.069387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:38.333 [2024-11-28 09:55:17.069397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.146 ms 00:22:38.333 [2024-11-28 09:55:17.069404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.333 [2024-11-28 09:55:17.069474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.333 [2024-11-28 09:55:17.069482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:38.333 [2024-11-28 09:55:17.069492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:38.333 [2024-11-28 09:55:17.069498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.333 [2024-11-28 09:55:17.069564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.333 [2024-11-28 09:55:17.069573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:38.333 [2024-11-28 09:55:17.069580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:38.333 [2024-11-28 09:55:17.069586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.333 [2024-11-28 09:55:17.069601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.333 [2024-11-28 09:55:17.069608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:38.333 [2024-11-28 09:55:17.069614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:38.333 [2024-11-28 09:55:17.069620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.333 [2024-11-28 09:55:17.069649] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:38.333 [2024-11-28 09:55:17.069658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.333 [2024-11-28 09:55:17.069664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:38.333 [2024-11-28 09:55:17.069671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:38.333 [2024-11-28 09:55:17.069677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.333 [2024-11-28 09:55:17.088226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.333 [2024-11-28 09:55:17.088252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:38.333 [2024-11-28 09:55:17.088264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.536 ms 00:22:38.333 [2024-11-28 09:55:17.088271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.333 [2024-11-28 09:55:17.088328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.333 [2024-11-28 09:55:17.088336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:38.333 [2024-11-28 09:55:17.088343] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:38.333 [2024-11-28 09:55:17.088350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.333 [2024-11-28 09:55:17.089333] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 234.376 ms, result 0 00:22:39.719  [2024-11-28T09:55:19.543Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-28T09:55:20.488Z] Copying: 23/1024 [MB] (11 MBps) [2024-11-28T09:55:21.432Z] Copying: 34/1024 [MB] (11 MBps) [2024-11-28T09:55:22.377Z] Copying: 46/1024 [MB] (11 MBps) [2024-11-28T09:55:23.322Z] Copying: 57/1024 [MB] (11 MBps) [2024-11-28T09:55:24.266Z] Copying: 69/1024 [MB] (11 MBps) [2024-11-28T09:55:25.654Z] Copying: 80/1024 [MB] (11 MBps) [2024-11-28T09:55:26.598Z] Copying: 92/1024 [MB] (11 MBps) [2024-11-28T09:55:27.538Z] Copying: 104/1024 [MB] (12 MBps) [2024-11-28T09:55:28.481Z] Copying: 116/1024 [MB] (12 MBps) [2024-11-28T09:55:29.424Z] Copying: 127/1024 [MB] (11 MBps) [2024-11-28T09:55:30.365Z] Copying: 139/1024 [MB] (11 MBps) [2024-11-28T09:55:31.309Z] Copying: 151/1024 [MB] (11 MBps) [2024-11-28T09:55:32.251Z] Copying: 162/1024 [MB] (11 MBps) [2024-11-28T09:55:33.637Z] Copying: 174/1024 [MB] (11 MBps) [2024-11-28T09:55:34.622Z] Copying: 186/1024 [MB] (11 MBps) [2024-11-28T09:55:35.565Z] Copying: 197/1024 [MB] (11 MBps) [2024-11-28T09:55:36.508Z] Copying: 209/1024 [MB] (11 MBps) [2024-11-28T09:55:37.452Z] Copying: 220/1024 [MB] (10 MBps) [2024-11-28T09:55:38.395Z] Copying: 231/1024 [MB] (11 MBps) [2024-11-28T09:55:39.339Z] Copying: 243/1024 [MB] (11 MBps) [2024-11-28T09:55:40.284Z] Copying: 253/1024 [MB] (10 MBps) [2024-11-28T09:55:41.230Z] Copying: 265/1024 [MB] (11 MBps) [2024-11-28T09:55:42.616Z] Copying: 278/1024 [MB] (12 MBps) [2024-11-28T09:55:43.560Z] Copying: 289/1024 [MB] (11 MBps) [2024-11-28T09:55:44.501Z] Copying: 301/1024 [MB] (11 MBps) [2024-11-28T09:55:45.446Z] Copying: 315/1024 [MB] (14 MBps) [2024-11-28T09:55:46.388Z] Copying: 332/1024 [MB] (16 MBps) [2024-11-28T09:55:47.333Z] Copying: 343/1024 [MB] (11 MBps) [2024-11-28T09:55:48.275Z] Copying: 354/1024 [MB] (11 MBps) [2024-11-28T09:55:49.657Z] Copying: 366/1024 [MB] (11 MBps) [2024-11-28T09:55:50.599Z] Copying: 381/1024 [MB] (15 MBps) [2024-11-28T09:55:51.541Z] Copying: 398/1024 [MB] (17 MBps) [2024-11-28T09:55:52.483Z] Copying: 410/1024 [MB] (11 MBps) [2024-11-28T09:55:53.425Z] Copying: 429/1024 [MB] (19 MBps) [2024-11-28T09:55:54.369Z] Copying: 440/1024 [MB] (10 MBps) [2024-11-28T09:55:55.312Z] Copying: 454/1024 [MB] (14 MBps) [2024-11-28T09:55:56.256Z] Copying: 464/1024 [MB] (10 MBps) [2024-11-28T09:55:57.641Z] Copying: 476/1024 [MB] (12 MBps) [2024-11-28T09:55:58.583Z] Copying: 487/1024 [MB] (10 MBps) [2024-11-28T09:55:59.522Z] Copying: 497/1024 [MB] (10 MBps) [2024-11-28T09:56:00.461Z] Copying: 511/1024 [MB] (14 MBps) [2024-11-28T09:56:01.405Z] Copying: 521/1024 [MB] (10 MBps) [2024-11-28T09:56:02.348Z] Copying: 537/1024 [MB] (15 MBps) [2024-11-28T09:56:03.289Z] Copying: 548/1024 [MB] (11 MBps) [2024-11-28T09:56:04.672Z] Copying: 559/1024 [MB] (11 MBps) [2024-11-28T09:56:05.243Z] Copying: 571/1024 [MB] (11 MBps) [2024-11-28T09:56:06.628Z] Copying: 582/1024 [MB] (11 MBps) [2024-11-28T09:56:07.572Z] Copying: 593/1024 [MB] (11 MBps) [2024-11-28T09:56:08.512Z] Copying: 604/1024 [MB] (10 MBps) [2024-11-28T09:56:09.483Z] Copying: 615/1024 [MB] (11 MBps) [2024-11-28T09:56:10.459Z] Copying: 627/1024 [MB] (11 MBps) [2024-11-28T09:56:11.403Z] Copying: 638/1024 [MB] (10 MBps) 
[2024-11-28T09:56:12.345Z] Copying: 649/1024 [MB] (11 MBps) [2024-11-28T09:56:13.289Z] Copying: 661/1024 [MB] (11 MBps) [2024-11-28T09:56:14.232Z] Copying: 674/1024 [MB] (13 MBps) [2024-11-28T09:56:15.618Z] Copying: 686/1024 [MB] (11 MBps) [2024-11-28T09:56:16.561Z] Copying: 701/1024 [MB] (14 MBps) [2024-11-28T09:56:17.502Z] Copying: 711/1024 [MB] (10 MBps) [2024-11-28T09:56:18.444Z] Copying: 723/1024 [MB] (11 MBps) [2024-11-28T09:56:19.383Z] Copying: 735/1024 [MB] (12 MBps) [2024-11-28T09:56:20.328Z] Copying: 746/1024 [MB] (11 MBps) [2024-11-28T09:56:21.272Z] Copying: 758/1024 [MB] (11 MBps) [2024-11-28T09:56:22.659Z] Copying: 768/1024 [MB] (10 MBps) [2024-11-28T09:56:23.232Z] Copying: 779/1024 [MB] (10 MBps) [2024-11-28T09:56:24.617Z] Copying: 790/1024 [MB] (10 MBps) [2024-11-28T09:56:25.560Z] Copying: 800/1024 [MB] (10 MBps) [2024-11-28T09:56:26.504Z] Copying: 811/1024 [MB] (10 MBps) [2024-11-28T09:56:27.449Z] Copying: 821/1024 [MB] (10 MBps) [2024-11-28T09:56:28.392Z] Copying: 833/1024 [MB] (11 MBps) [2024-11-28T09:56:29.336Z] Copying: 845/1024 [MB] (11 MBps) [2024-11-28T09:56:30.278Z] Copying: 857/1024 [MB] (11 MBps) [2024-11-28T09:56:31.665Z] Copying: 869/1024 [MB] (11 MBps) [2024-11-28T09:56:32.237Z] Copying: 885/1024 [MB] (16 MBps) [2024-11-28T09:56:33.622Z] Copying: 897/1024 [MB] (11 MBps) [2024-11-28T09:56:34.566Z] Copying: 909/1024 [MB] (12 MBps) [2024-11-28T09:56:35.508Z] Copying: 920/1024 [MB] (10 MBps) [2024-11-28T09:56:36.451Z] Copying: 931/1024 [MB] (11 MBps) [2024-11-28T09:56:37.394Z] Copying: 944/1024 [MB] (12 MBps) [2024-11-28T09:56:38.338Z] Copying: 964/1024 [MB] (20 MBps) [2024-11-28T09:56:39.281Z] Copying: 975/1024 [MB] (11 MBps) [2024-11-28T09:56:40.668Z] Copying: 987/1024 [MB] (11 MBps) [2024-11-28T09:56:41.240Z] Copying: 999/1024 [MB] (11 MBps) [2024-11-28T09:56:42.627Z] Copying: 1010/1024 [MB] (11 MBps) [2024-11-28T09:56:42.627Z] Copying: 1021/1024 [MB] (11 MBps) [2024-11-28T09:56:42.627Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-28 09:56:42.606294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.747 [2024-11-28 09:56:42.606364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:03.747 [2024-11-28 09:56:42.606378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:03.747 [2024-11-28 09:56:42.606385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.747 [2024-11-28 09:56:42.606404] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:03.747 [2024-11-28 09:56:42.608756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.747 [2024-11-28 09:56:42.608792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:03.747 [2024-11-28 09:56:42.608801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.339 ms 00:24:03.747 [2024-11-28 09:56:42.608807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.747 [2024-11-28 09:56:42.608986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.747 [2024-11-28 09:56:42.608995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:03.747 [2024-11-28 09:56:42.609002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:24:03.747 [2024-11-28 09:56:42.609008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.747 [2024-11-28 09:56:42.612265] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:24:03.747 [2024-11-28 09:56:42.612369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:03.747 [2024-11-28 09:56:42.612420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:24:03.747 [2024-11-28 09:56:42.612447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.747 [2024-11-28 09:56:42.617173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.747 [2024-11-28 09:56:42.617273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:03.747 [2024-11-28 09:56:42.617320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.692 ms 00:24:03.747 [2024-11-28 09:56:42.617338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.009 [2024-11-28 09:56:42.637669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.009 [2024-11-28 09:56:42.637712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:04.009 [2024-11-28 09:56:42.637722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.276 ms 00:24:04.009 [2024-11-28 09:56:42.637728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.009 [2024-11-28 09:56:42.650187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.009 [2024-11-28 09:56:42.650216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:04.009 [2024-11-28 09:56:42.650225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.427 ms 00:24:04.009 [2024-11-28 09:56:42.650233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.009 [2024-11-28 09:56:42.650328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.009 [2024-11-28 09:56:42.650336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:04.009 [2024-11-28 09:56:42.650343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:24:04.009 [2024-11-28 09:56:42.650351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.009 [2024-11-28 09:56:42.669138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.009 [2024-11-28 09:56:42.669274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:04.009 [2024-11-28 09:56:42.669288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.776 ms 00:24:04.009 [2024-11-28 09:56:42.669294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.009 [2024-11-28 09:56:42.687673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.009 [2024-11-28 09:56:42.687700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:04.009 [2024-11-28 09:56:42.687708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.356 ms 00:24:04.009 [2024-11-28 09:56:42.687714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.009 [2024-11-28 09:56:42.705248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.009 [2024-11-28 09:56:42.705274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:04.009 [2024-11-28 09:56:42.705282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.507 ms 00:24:04.009 [2024-11-28 09:56:42.705287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.009 
[2024-11-28 09:56:42.723057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.009 [2024-11-28 09:56:42.723083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:04.009 [2024-11-28 09:56:42.723090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.711 ms 00:24:04.009 [2024-11-28 09:56:42.723096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.009 [2024-11-28 09:56:42.723122] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:04.010 [2024-11-28 09:56:42.723139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:24:04.010 [2024-11-28 09:56:42.723281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:04.010 [2024-11-28 09:56:42.723563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723715] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:04.011 [2024-11-28 09:56:42.723748] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:04.011 [2024-11-28 09:56:42.723754] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9e2861e-73d9-47cb-81fe-9ef994e3fa71 00:24:04.011 [2024-11-28 09:56:42.723760] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:04.011 [2024-11-28 09:56:42.723766] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:04.011 [2024-11-28 09:56:42.723772] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:04.011 [2024-11-28 09:56:42.723779] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:04.011 [2024-11-28 09:56:42.723789] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:04.011 [2024-11-28 09:56:42.723795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:04.011 [2024-11-28 09:56:42.723801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:04.011 [2024-11-28 09:56:42.723806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:04.011 [2024-11-28 09:56:42.723810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:04.011 [2024-11-28 09:56:42.723816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.011 [2024-11-28 09:56:42.723822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:04.011 [2024-11-28 09:56:42.723828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:24:04.011 [2024-11-28 09:56:42.723836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.011 [2024-11-28 09:56:42.734103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.011 [2024-11-28 09:56:42.734128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:04.011 [2024-11-28 09:56:42.734135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.255 ms 00:24:04.011 [2024-11-28 09:56:42.734142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.011 [2024-11-28 09:56:42.734437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.011 [2024-11-28 09:56:42.734446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:04.011 [2024-11-28 09:56:42.734456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:24:04.011 [2024-11-28 09:56:42.734462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.011 [2024-11-28 09:56:42.762182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.011 [2024-11-28 09:56:42.762209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:04.011 [2024-11-28 09:56:42.762217] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.011 [2024-11-28 09:56:42.762223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.011 [2024-11-28 09:56:42.762264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.011 [2024-11-28 09:56:42.762271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:04.011 [2024-11-28 09:56:42.762282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.011 [2024-11-28 09:56:42.762288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.011 [2024-11-28 09:56:42.762330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.011 [2024-11-28 09:56:42.762338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:04.011 [2024-11-28 09:56:42.762344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.011 [2024-11-28 09:56:42.762351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.011 [2024-11-28 09:56:42.762363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.011 [2024-11-28 09:56:42.762370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:04.011 [2024-11-28 09:56:42.762377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.011 [2024-11-28 09:56:42.762385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.011 [2024-11-28 09:56:42.825616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.011 [2024-11-28 09:56:42.825650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:04.011 [2024-11-28 09:56:42.825659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.011 [2024-11-28 09:56:42.825666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.011 [2024-11-28 09:56:42.877143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.011 [2024-11-28 09:56:42.877186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:04.011 [2024-11-28 09:56:42.877199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.011 [2024-11-28 09:56:42.877206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.011 [2024-11-28 09:56:42.877277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.012 [2024-11-28 09:56:42.877286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:04.012 [2024-11-28 09:56:42.877293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.012 [2024-11-28 09:56:42.877299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.012 [2024-11-28 09:56:42.877328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.012 [2024-11-28 09:56:42.877335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:04.012 [2024-11-28 09:56:42.877341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.012 [2024-11-28 09:56:42.877348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.012 [2024-11-28 09:56:42.877425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.012 [2024-11-28 09:56:42.877434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:24:04.012 [2024-11-28 09:56:42.877440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.012 [2024-11-28 09:56:42.877447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.012 [2024-11-28 09:56:42.877470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.012 [2024-11-28 09:56:42.877478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:04.012 [2024-11-28 09:56:42.877484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.012 [2024-11-28 09:56:42.877490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.012 [2024-11-28 09:56:42.877526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.012 [2024-11-28 09:56:42.877535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:04.012 [2024-11-28 09:56:42.877543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.012 [2024-11-28 09:56:42.877549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.012 [2024-11-28 09:56:42.877586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.012 [2024-11-28 09:56:42.877593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:04.012 [2024-11-28 09:56:42.877599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.012 [2024-11-28 09:56:42.877606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.012 [2024-11-28 09:56:42.877726] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 271.394 ms, result 0 00:24:04.584 00:24:04.584 00:24:04.584 09:56:43 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:07.193 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:07.193 09:56:45 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:07.193 [2024-11-28 09:56:45.762229] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:24:07.193 [2024-11-28 09:56:45.762408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79392 ] 00:24:07.193 [2024-11-28 09:56:45.929301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.455 [2024-11-28 09:56:46.075481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.717 [2024-11-28 09:56:46.410941] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.717 [2024-11-28 09:56:46.411031] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.717 [2024-11-28 09:56:46.575555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.717 [2024-11-28 09:56:46.575620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:07.717 [2024-11-28 09:56:46.575637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:07.717 [2024-11-28 09:56:46.575646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.717 [2024-11-28 09:56:46.575707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.717 [2024-11-28 09:56:46.575721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:07.717 [2024-11-28 09:56:46.575731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:07.717 [2024-11-28 09:56:46.575739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.717 [2024-11-28 09:56:46.575760] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:07.717 [2024-11-28 09:56:46.576519] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:07.717 [2024-11-28 09:56:46.576551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.717 [2024-11-28 09:56:46.576561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:07.717 [2024-11-28 09:56:46.576572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:24:07.717 [2024-11-28 09:56:46.576580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.717 [2024-11-28 09:56:46.578859] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:07.717 [2024-11-28 09:56:46.593901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.717 [2024-11-28 09:56:46.593948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:07.717 [2024-11-28 09:56:46.593962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.044 ms 00:24:07.717 [2024-11-28 09:56:46.593972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.717 [2024-11-28 09:56:46.594060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.717 [2024-11-28 09:56:46.594071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:07.717 [2024-11-28 09:56:46.594081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:07.717 [2024-11-28 09:56:46.594090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.980 [2024-11-28 09:56:46.605497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:07.981 [2024-11-28 09:56:46.605535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:07.981 [2024-11-28 09:56:46.605548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.308 ms 00:24:07.981 [2024-11-28 09:56:46.605563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.981 [2024-11-28 09:56:46.605650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.981 [2024-11-28 09:56:46.605660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:07.981 [2024-11-28 09:56:46.605669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:07.981 [2024-11-28 09:56:46.605677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.981 [2024-11-28 09:56:46.605762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.981 [2024-11-28 09:56:46.605776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:07.981 [2024-11-28 09:56:46.605785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:07.981 [2024-11-28 09:56:46.605794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.981 [2024-11-28 09:56:46.605822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:07.981 [2024-11-28 09:56:46.610315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.981 [2024-11-28 09:56:46.610351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:07.981 [2024-11-28 09:56:46.610365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.499 ms 00:24:07.981 [2024-11-28 09:56:46.610373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.981 [2024-11-28 09:56:46.610410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.981 [2024-11-28 09:56:46.610419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:07.981 [2024-11-28 09:56:46.610429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:07.981 [2024-11-28 09:56:46.610437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.981 [2024-11-28 09:56:46.610475] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:07.981 [2024-11-28 09:56:46.610502] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:07.981 [2024-11-28 09:56:46.610544] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:07.981 [2024-11-28 09:56:46.610565] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:07.981 [2024-11-28 09:56:46.610682] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:07.981 [2024-11-28 09:56:46.610693] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:07.981 [2024-11-28 09:56:46.610705] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:07.981 [2024-11-28 09:56:46.610716] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:07.981 [2024-11-28 09:56:46.610729] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:07.981 [2024-11-28 09:56:46.610737] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:07.981 [2024-11-28 09:56:46.610747] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:07.981 [2024-11-28 09:56:46.610761] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:07.981 [2024-11-28 09:56:46.610770] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:07.981 [2024-11-28 09:56:46.610780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.981 [2024-11-28 09:56:46.610788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:07.981 [2024-11-28 09:56:46.610797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:24:07.981 [2024-11-28 09:56:46.610805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.981 [2024-11-28 09:56:46.610890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.981 [2024-11-28 09:56:46.610899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:07.981 [2024-11-28 09:56:46.610907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:24:07.981 [2024-11-28 09:56:46.610915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.981 [2024-11-28 09:56:46.611027] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:07.981 [2024-11-28 09:56:46.611051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:07.981 [2024-11-28 09:56:46.611061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.981 [2024-11-28 09:56:46.611070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.981 [2024-11-28 09:56:46.611080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:07.981 [2024-11-28 09:56:46.611091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:07.981 [2024-11-28 09:56:46.611100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:07.981 [2024-11-28 09:56:46.611107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:07.981 [2024-11-28 09:56:46.611115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:07.981 [2024-11-28 09:56:46.611122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.981 [2024-11-28 09:56:46.611130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:07.981 [2024-11-28 09:56:46.611137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:07.981 [2024-11-28 09:56:46.611144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.981 [2024-11-28 09:56:46.611180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:07.981 [2024-11-28 09:56:46.611187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:07.981 [2024-11-28 09:56:46.611194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.981 [2024-11-28 09:56:46.611202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:07.981 [2024-11-28 09:56:46.611209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:07.981 [2024-11-28 09:56:46.611216] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.981 [2024-11-28 09:56:46.611224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:07.981 [2024-11-28 09:56:46.611232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:07.981 [2024-11-28 09:56:46.611239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.981 [2024-11-28 09:56:46.611245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:07.981 [2024-11-28 09:56:46.611252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:07.981 [2024-11-28 09:56:46.611259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.981 [2024-11-28 09:56:46.611266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:07.981 [2024-11-28 09:56:46.611273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:07.981 [2024-11-28 09:56:46.611280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.981 [2024-11-28 09:56:46.611287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:07.981 [2024-11-28 09:56:46.611296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:07.981 [2024-11-28 09:56:46.611303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.981 [2024-11-28 09:56:46.611310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:07.981 [2024-11-28 09:56:46.611316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:07.981 [2024-11-28 09:56:46.611323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.981 [2024-11-28 09:56:46.611330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:07.981 [2024-11-28 09:56:46.611336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:07.981 [2024-11-28 09:56:46.611344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.981 [2024-11-28 09:56:46.611356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:07.981 [2024-11-28 09:56:46.611364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:07.981 [2024-11-28 09:56:46.611371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.981 [2024-11-28 09:56:46.611379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:07.981 [2024-11-28 09:56:46.611387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:07.981 [2024-11-28 09:56:46.611393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.981 [2024-11-28 09:56:46.611403] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:07.981 [2024-11-28 09:56:46.611412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:07.981 [2024-11-28 09:56:46.611420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.981 [2024-11-28 09:56:46.611428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.981 [2024-11-28 09:56:46.611436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:07.981 [2024-11-28 09:56:46.611443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:07.981 [2024-11-28 09:56:46.611449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:07.981 
[2024-11-28 09:56:46.611457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:07.981 [2024-11-28 09:56:46.611464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:07.981 [2024-11-28 09:56:46.611471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:07.981 [2024-11-28 09:56:46.611479] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:07.981 [2024-11-28 09:56:46.611488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.981 [2024-11-28 09:56:46.611500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:07.981 [2024-11-28 09:56:46.611508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:07.981 [2024-11-28 09:56:46.611516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:07.981 [2024-11-28 09:56:46.611524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:07.981 [2024-11-28 09:56:46.611531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:07.982 [2024-11-28 09:56:46.611538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:07.982 [2024-11-28 09:56:46.611546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:07.982 [2024-11-28 09:56:46.611553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:07.982 [2024-11-28 09:56:46.611560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:07.982 [2024-11-28 09:56:46.611568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:07.982 [2024-11-28 09:56:46.611577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:07.982 [2024-11-28 09:56:46.611584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:07.982 [2024-11-28 09:56:46.611592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:07.982 [2024-11-28 09:56:46.611600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:07.982 [2024-11-28 09:56:46.611609] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:07.982 [2024-11-28 09:56:46.611620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.982 [2024-11-28 09:56:46.611630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:07.982 [2024-11-28 09:56:46.611637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:07.982 [2024-11-28 09:56:46.611645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:07.982 [2024-11-28 09:56:46.611652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:07.982 [2024-11-28 09:56:46.611660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.982 [2024-11-28 09:56:46.611667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:07.982 [2024-11-28 09:56:46.611675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:24:07.982 [2024-11-28 09:56:46.611683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.982 [2024-11-28 09:56:46.649664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.982 [2024-11-28 09:56:46.649719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:07.982 [2024-11-28 09:56:46.649732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.935 ms 00:24:07.982 [2024-11-28 09:56:46.649746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.982 [2024-11-28 09:56:46.649840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.982 [2024-11-28 09:56:46.649851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:07.982 [2024-11-28 09:56:46.649861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:07.982 [2024-11-28 09:56:46.649870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.982 [2024-11-28 09:56:46.698890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.982 [2024-11-28 09:56:46.698941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:07.982 [2024-11-28 09:56:46.698956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.956 ms 00:24:07.982 [2024-11-28 09:56:46.698965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.982 [2024-11-28 09:56:46.699018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.982 [2024-11-28 09:56:46.699029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:07.982 [2024-11-28 09:56:46.699044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:07.982 [2024-11-28 09:56:46.699052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.982 [2024-11-28 09:56:46.699805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.982 [2024-11-28 09:56:46.699842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:07.982 [2024-11-28 09:56:46.699854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:24:07.982 [2024-11-28 09:56:46.699863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.982 [2024-11-28 09:56:46.700040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.982 [2024-11-28 09:56:46.700052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:07.982 [2024-11-28 09:56:46.700069] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:24:07.982 [2024-11-28 09:56:46.700077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.982 [2024-11-28 09:56:46.718111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.982 [2024-11-28 09:56:46.718178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:07.982 [2024-11-28 09:56:46.718190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.012 ms 00:24:07.982 [2024-11-28 09:56:46.718199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.982 [2024-11-28 09:56:46.733644] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:07.982 [2024-11-28 09:56:46.733686] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:07.982 [2024-11-28 09:56:46.733711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.982 [2024-11-28 09:56:46.733721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:07.982 [2024-11-28 09:56:46.733732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.396 ms 00:24:07.982 [2024-11-28 09:56:46.733741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.982 [2024-11-28 09:56:46.759913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.982 [2024-11-28 09:56:46.759958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:07.982 [2024-11-28 09:56:46.759971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.115 ms 00:24:07.982 [2024-11-28 09:56:46.759980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.982 [2024-11-28 09:56:46.773038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.982 [2024-11-28 09:56:46.773079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:07.982 [2024-11-28 09:56:46.773091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.985 ms 00:24:07.982 [2024-11-28 09:56:46.773099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.982 [2024-11-28 09:56:46.785758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.982 [2024-11-28 09:56:46.785811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:07.982 [2024-11-28 09:56:46.785823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.610 ms 00:24:07.982 [2024-11-28 09:56:46.785832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.982 [2024-11-28 09:56:46.786515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.982 [2024-11-28 09:56:46.786544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:07.982 [2024-11-28 09:56:46.786560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:24:07.982 [2024-11-28 09:56:46.786570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.244 [2024-11-28 09:56:46.858766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.244 [2024-11-28 09:56:46.858823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:08.244 [2024-11-28 09:56:46.858847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 72.174 ms 00:24:08.244 [2024-11-28 09:56:46.858857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.244 [2024-11-28 09:56:46.871496] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:08.244 [2024-11-28 09:56:46.875730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.244 [2024-11-28 09:56:46.875768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:08.244 [2024-11-28 09:56:46.875783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.811 ms 00:24:08.244 [2024-11-28 09:56:46.875794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.244 [2024-11-28 09:56:46.875894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.244 [2024-11-28 09:56:46.875907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:08.244 [2024-11-28 09:56:46.875920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:08.244 [2024-11-28 09:56:46.875929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.244 [2024-11-28 09:56:46.876011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.244 [2024-11-28 09:56:46.876023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:08.244 [2024-11-28 09:56:46.876034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:24:08.244 [2024-11-28 09:56:46.876042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.244 [2024-11-28 09:56:46.876066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.244 [2024-11-28 09:56:46.876075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:08.244 [2024-11-28 09:56:46.876084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:08.244 [2024-11-28 09:56:46.876092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.244 [2024-11-28 09:56:46.876137] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:08.244 [2024-11-28 09:56:46.876149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.244 [2024-11-28 09:56:46.876179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:08.244 [2024-11-28 09:56:46.876189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:08.244 [2024-11-28 09:56:46.876197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.244 [2024-11-28 09:56:46.902335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.244 [2024-11-28 09:56:46.902382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:08.244 [2024-11-28 09:56:46.902404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.116 ms 00:24:08.244 [2024-11-28 09:56:46.902414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.244 [2024-11-28 09:56:46.902508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.244 [2024-11-28 09:56:46.902519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:08.244 [2024-11-28 09:56:46.902530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:08.244 [2024-11-28 09:56:46.902539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:08.244 [2024-11-28 09:56:46.904018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.901 ms, result 0 00:24:09.188  [2024-11-28T09:56:49.014Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-28T09:56:49.959Z] Copying: 22/1024 [MB] (11 MBps) [2024-11-28T09:56:51.348Z] Copying: 33/1024 [MB] (11 MBps) [2024-11-28T09:56:51.919Z] Copying: 43/1024 [MB] (10 MBps) [2024-11-28T09:56:53.306Z] Copying: 54/1024 [MB] (11 MBps) [2024-11-28T09:56:54.250Z] Copying: 65/1024 [MB] (10 MBps) [2024-11-28T09:56:55.195Z] Copying: 76/1024 [MB] (11 MBps) [2024-11-28T09:56:56.141Z] Copying: 87/1024 [MB] (11 MBps) [2024-11-28T09:56:57.086Z] Copying: 99/1024 [MB] (11 MBps) [2024-11-28T09:56:58.031Z] Copying: 110/1024 [MB] (11 MBps) [2024-11-28T09:56:58.974Z] Copying: 121/1024 [MB] (11 MBps) [2024-11-28T09:56:59.917Z] Copying: 133/1024 [MB] (11 MBps) [2024-11-28T09:57:01.306Z] Copying: 144/1024 [MB] (11 MBps) [2024-11-28T09:57:02.252Z] Copying: 155/1024 [MB] (11 MBps) [2024-11-28T09:57:03.194Z] Copying: 167/1024 [MB] (11 MBps) [2024-11-28T09:57:04.140Z] Copying: 179/1024 [MB] (11 MBps) [2024-11-28T09:57:05.085Z] Copying: 190/1024 [MB] (10 MBps) [2024-11-28T09:57:06.028Z] Copying: 200/1024 [MB] (10 MBps) [2024-11-28T09:57:06.974Z] Copying: 211/1024 [MB] (11 MBps) [2024-11-28T09:57:07.921Z] Copying: 222/1024 [MB] (10 MBps) [2024-11-28T09:57:09.309Z] Copying: 232/1024 [MB] (10 MBps) [2024-11-28T09:57:10.254Z] Copying: 244/1024 [MB] (11 MBps) [2024-11-28T09:57:11.198Z] Copying: 255/1024 [MB] (11 MBps) [2024-11-28T09:57:12.144Z] Copying: 267/1024 [MB] (11 MBps) [2024-11-28T09:57:13.090Z] Copying: 283440/1048576 [kB] (10008 kBps) [2024-11-28T09:57:14.035Z] Copying: 293472/1048576 [kB] (10032 kBps) [2024-11-28T09:57:14.981Z] Copying: 296/1024 [MB] (10 MBps) [2024-11-28T09:57:15.943Z] Copying: 307/1024 [MB] (10 MBps) [2024-11-28T09:57:17.332Z] Copying: 318/1024 [MB] (11 MBps) [2024-11-28T09:57:18.290Z] Copying: 330/1024 [MB] (11 MBps) [2024-11-28T09:57:18.941Z] Copying: 342/1024 [MB] (11 MBps) [2024-11-28T09:57:20.323Z] Copying: 353/1024 [MB] (11 MBps) [2024-11-28T09:57:21.262Z] Copying: 365/1024 [MB] (11 MBps) [2024-11-28T09:57:22.202Z] Copying: 376/1024 [MB] (11 MBps) [2024-11-28T09:57:23.142Z] Copying: 387/1024 [MB] (11 MBps) [2024-11-28T09:57:24.081Z] Copying: 398/1024 [MB] (10 MBps) [2024-11-28T09:57:25.022Z] Copying: 409/1024 [MB] (11 MBps) [2024-11-28T09:57:25.963Z] Copying: 419/1024 [MB] (10 MBps) [2024-11-28T09:57:27.347Z] Copying: 430/1024 [MB] (11 MBps) [2024-11-28T09:57:27.917Z] Copying: 442/1024 [MB] (11 MBps) [2024-11-28T09:57:29.302Z] Copying: 452/1024 [MB] (10 MBps) [2024-11-28T09:57:30.246Z] Copying: 463/1024 [MB] (11 MBps) [2024-11-28T09:57:31.187Z] Copying: 474/1024 [MB] (11 MBps) [2024-11-28T09:57:32.128Z] Copying: 486/1024 [MB] (11 MBps) [2024-11-28T09:57:33.072Z] Copying: 496/1024 [MB] (10 MBps) [2024-11-28T09:57:34.013Z] Copying: 518768/1048576 [kB] (10008 kBps) [2024-11-28T09:57:34.954Z] Copying: 517/1024 [MB] (11 MBps) [2024-11-28T09:57:36.337Z] Copying: 528/1024 [MB] (10 MBps) [2024-11-28T09:57:37.278Z] Copying: 539/1024 [MB] (11 MBps) [2024-11-28T09:57:38.235Z] Copying: 550/1024 [MB] (10 MBps) [2024-11-28T09:57:39.177Z] Copying: 561/1024 [MB] (11 MBps) [2024-11-28T09:57:40.119Z] Copying: 572/1024 [MB] (11 MBps) [2024-11-28T09:57:41.063Z] Copying: 596912/1048576 [kB] (10192 kBps) [2024-11-28T09:57:42.006Z] Copying: 594/1024 [MB] (11 MBps) [2024-11-28T09:57:42.948Z] Copying: 604/1024 [MB] (10 MBps) [2024-11-28T09:57:44.333Z] Copying: 
616/1024 [MB] (11 MBps) [2024-11-28T09:57:45.273Z] Copying: 627/1024 [MB] (11 MBps) [2024-11-28T09:57:46.214Z] Copying: 639/1024 [MB] (11 MBps) [2024-11-28T09:57:47.160Z] Copying: 650/1024 [MB] (11 MBps) [2024-11-28T09:57:48.106Z] Copying: 661/1024 [MB] (10 MBps) [2024-11-28T09:57:49.052Z] Copying: 673/1024 [MB] (11 MBps) [2024-11-28T09:57:49.997Z] Copying: 684/1024 [MB] (10 MBps) [2024-11-28T09:57:50.942Z] Copying: 695/1024 [MB] (11 MBps) [2024-11-28T09:57:52.330Z] Copying: 705/1024 [MB] (10 MBps) [2024-11-28T09:57:52.939Z] Copying: 717/1024 [MB] (11 MBps) [2024-11-28T09:57:53.930Z] Copying: 727/1024 [MB] (10 MBps) [2024-11-28T09:57:55.320Z] Copying: 738/1024 [MB] (10 MBps) [2024-11-28T09:57:56.263Z] Copying: 749/1024 [MB] (10 MBps) [2024-11-28T09:57:57.206Z] Copying: 760/1024 [MB] (10 MBps) [2024-11-28T09:57:58.152Z] Copying: 771/1024 [MB] (11 MBps) [2024-11-28T09:57:59.097Z] Copying: 783/1024 [MB] (11 MBps) [2024-11-28T09:58:00.043Z] Copying: 794/1024 [MB] (11 MBps) [2024-11-28T09:58:00.985Z] Copying: 805/1024 [MB] (11 MBps) [2024-11-28T09:58:01.928Z] Copying: 817/1024 [MB] (11 MBps) [2024-11-28T09:58:03.314Z] Copying: 828/1024 [MB] (11 MBps) [2024-11-28T09:58:04.258Z] Copying: 839/1024 [MB] (11 MBps) [2024-11-28T09:58:05.217Z] Copying: 851/1024 [MB] (11 MBps) [2024-11-28T09:58:06.163Z] Copying: 862/1024 [MB] (11 MBps) [2024-11-28T09:58:07.106Z] Copying: 873/1024 [MB] (11 MBps) [2024-11-28T09:58:08.049Z] Copying: 885/1024 [MB] (11 MBps) [2024-11-28T09:58:08.992Z] Copying: 896/1024 [MB] (11 MBps) [2024-11-28T09:58:09.936Z] Copying: 907/1024 [MB] (10 MBps) [2024-11-28T09:58:11.322Z] Copying: 917/1024 [MB] (10 MBps) [2024-11-28T09:58:12.264Z] Copying: 928/1024 [MB] (11 MBps) [2024-11-28T09:58:13.207Z] Copying: 940/1024 [MB] (11 MBps) [2024-11-28T09:58:14.152Z] Copying: 951/1024 [MB] (11 MBps) [2024-11-28T09:58:15.095Z] Copying: 962/1024 [MB] (11 MBps) [2024-11-28T09:58:16.039Z] Copying: 973/1024 [MB] (11 MBps) [2024-11-28T09:58:16.983Z] Copying: 985/1024 [MB] (11 MBps) [2024-11-28T09:58:17.926Z] Copying: 996/1024 [MB] (11 MBps) [2024-11-28T09:58:19.313Z] Copying: 1007/1024 [MB] (11 MBps) [2024-11-28T09:58:20.256Z] Copying: 1018/1024 [MB] (11 MBps) [2024-11-28T09:58:20.256Z] Copying: 1048568/1048576 [kB] (5304 kBps) [2024-11-28T09:58:20.256Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-28 09:58:19.939740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.376 [2024-11-28 09:58:19.939813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:41.376 [2024-11-28 09:58:19.939838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:41.376 [2024-11-28 09:58:19.939847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.376 [2024-11-28 09:58:19.942054] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:41.376 [2024-11-28 09:58:19.946859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.376 [2024-11-28 09:58:19.946899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:41.376 [2024-11-28 09:58:19.946909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.760 ms 00:25:41.376 [2024-11-28 09:58:19.946917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.376 [2024-11-28 09:58:19.961038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.376 [2024-11-28 09:58:19.961075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Stop core poller 00:25:41.376 [2024-11-28 09:58:19.961087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.091 ms 00:25:41.376 [2024-11-28 09:58:19.961101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.376 [2024-11-28 09:58:19.981907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.376 [2024-11-28 09:58:19.981955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:41.376 [2024-11-28 09:58:19.981965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.791 ms 00:25:41.376 [2024-11-28 09:58:19.981974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.376 [2024-11-28 09:58:19.988606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.376 [2024-11-28 09:58:19.988635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:41.376 [2024-11-28 09:58:19.988645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.607 ms 00:25:41.376 [2024-11-28 09:58:19.988658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.376 [2024-11-28 09:58:20.014199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.376 [2024-11-28 09:58:20.014238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:41.376 [2024-11-28 09:58:20.014250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.501 ms 00:25:41.376 [2024-11-28 09:58:20.014258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.376 [2024-11-28 09:58:20.029444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.376 [2024-11-28 09:58:20.029485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:41.376 [2024-11-28 09:58:20.029497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.148 ms 00:25:41.376 [2024-11-28 09:58:20.029505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.638 [2024-11-28 09:58:20.285406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.638 [2024-11-28 09:58:20.285510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:41.638 [2024-11-28 09:58:20.285528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 255.853 ms 00:25:41.638 [2024-11-28 09:58:20.285540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.638 [2024-11-28 09:58:20.313520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.638 [2024-11-28 09:58:20.313577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:41.638 [2024-11-28 09:58:20.313593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.961 ms 00:25:41.638 [2024-11-28 09:58:20.313603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.638 [2024-11-28 09:58:20.339541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.638 [2024-11-28 09:58:20.339589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:41.638 [2024-11-28 09:58:20.339602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.883 ms 00:25:41.638 [2024-11-28 09:58:20.339611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.638 [2024-11-28 09:58:20.364366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.638 
[2024-11-28 09:58:20.364414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:41.638 [2024-11-28 09:58:20.364427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.707 ms 00:25:41.638 [2024-11-28 09:58:20.364436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.638 [2024-11-28 09:58:20.389416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.638 [2024-11-28 09:58:20.389462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:41.638 [2024-11-28 09:58:20.389475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.899 ms 00:25:41.639 [2024-11-28 09:58:20.389484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.639 [2024-11-28 09:58:20.389532] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:41.639 [2024-11-28 09:58:20.389552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 89856 / 261120 wr_cnt: 1 state: open 00:25:41.639 [2024-11-28 09:58:20.389565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389712] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389951] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.389995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 
09:58:20.390186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:41.639 [2024-11-28 09:58:20.390352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:41.640 [2024-11-28 09:58:20.390362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:41.640 [2024-11-28 09:58:20.390370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:41.640 [2024-11-28 09:58:20.390378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:41.640 [2024-11-28 09:58:20.390387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:41.640 [2024-11-28 09:58:20.390395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:41.640 [2024-11-28 09:58:20.390403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:25:41.640 [2024-11-28 09:58:20.390415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:41.640 [2024-11-28 09:58:20.390425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:41.640 [2024-11-28 09:58:20.390433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:41.640 [2024-11-28 09:58:20.390442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:41.640 [2024-11-28 09:58:20.390450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:41.640 [2024-11-28 09:58:20.390458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:41.640 [2024-11-28 09:58:20.390466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:41.640 [2024-11-28 09:58:20.390485] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:41.640 [2024-11-28 09:58:20.390495] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9e2861e-73d9-47cb-81fe-9ef994e3fa71 00:25:41.640 [2024-11-28 09:58:20.390504] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 89856 00:25:41.640 [2024-11-28 09:58:20.390511] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 90816 00:25:41.640 [2024-11-28 09:58:20.390519] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 89856 00:25:41.640 [2024-11-28 09:58:20.390529] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0107 00:25:41.640 [2024-11-28 09:58:20.390553] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:41.640 [2024-11-28 09:58:20.390561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:41.640 [2024-11-28 09:58:20.390570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:41.640 [2024-11-28 09:58:20.390578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:41.640 [2024-11-28 09:58:20.390585] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:41.640 [2024-11-28 09:58:20.390594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.640 [2024-11-28 09:58:20.390604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:41.640 [2024-11-28 09:58:20.390615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:25:41.640 [2024-11-28 09:58:20.390623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.640 [2024-11-28 09:58:20.405141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.640 [2024-11-28 09:58:20.405206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:41.640 [2024-11-28 09:58:20.405226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.499 ms 00:25:41.640 [2024-11-28 09:58:20.405237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.640 [2024-11-28 09:58:20.405670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.640 [2024-11-28 09:58:20.405693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:41.640 [2024-11-28 09:58:20.405704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:25:41.640 [2024-11-28 
09:58:20.405712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.640 [2024-11-28 09:58:20.445381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.640 [2024-11-28 09:58:20.445432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:41.640 [2024-11-28 09:58:20.445445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.640 [2024-11-28 09:58:20.445455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.640 [2024-11-28 09:58:20.445535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.640 [2024-11-28 09:58:20.445546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:41.640 [2024-11-28 09:58:20.445555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.640 [2024-11-28 09:58:20.445565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.640 [2024-11-28 09:58:20.445655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.640 [2024-11-28 09:58:20.445674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:41.640 [2024-11-28 09:58:20.445684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.640 [2024-11-28 09:58:20.445693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.640 [2024-11-28 09:58:20.445711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.640 [2024-11-28 09:58:20.445721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:41.640 [2024-11-28 09:58:20.445729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.640 [2024-11-28 09:58:20.445740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.901 [2024-11-28 09:58:20.525099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.901 [2024-11-28 09:58:20.525158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:41.901 [2024-11-28 09:58:20.525170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.901 [2024-11-28 09:58:20.525177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.901 [2024-11-28 09:58:20.580071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.901 [2024-11-28 09:58:20.580112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:41.901 [2024-11-28 09:58:20.580123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.901 [2024-11-28 09:58:20.580130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.901 [2024-11-28 09:58:20.580187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.901 [2024-11-28 09:58:20.580196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:41.901 [2024-11-28 09:58:20.580203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.901 [2024-11-28 09:58:20.580214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.901 [2024-11-28 09:58:20.580267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.901 [2024-11-28 09:58:20.580277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:41.901 [2024-11-28 09:58:20.580285] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.901 [2024-11-28 09:58:20.580292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.901 [2024-11-28 09:58:20.580536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.901 [2024-11-28 09:58:20.580562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:41.901 [2024-11-28 09:58:20.580569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.901 [2024-11-28 09:58:20.580579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.901 [2024-11-28 09:58:20.580608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.901 [2024-11-28 09:58:20.580617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:41.901 [2024-11-28 09:58:20.580624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.901 [2024-11-28 09:58:20.580630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.901 [2024-11-28 09:58:20.580668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.901 [2024-11-28 09:58:20.580678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:41.901 [2024-11-28 09:58:20.580684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.901 [2024-11-28 09:58:20.580691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.901 [2024-11-28 09:58:20.580739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:41.901 [2024-11-28 09:58:20.580749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:41.901 [2024-11-28 09:58:20.580756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:41.901 [2024-11-28 09:58:20.580763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.901 [2024-11-28 09:58:20.580878] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 644.027 ms, result 0 00:25:42.843 00:25:42.843 00:25:42.843 09:58:21 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:42.843 [2024-11-28 09:58:21.626723] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:25:42.843 [2024-11-28 09:58:21.626847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80377 ] 00:25:43.104 [2024-11-28 09:58:21.783437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.104 [2024-11-28 09:58:21.870318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.364 [2024-11-28 09:58:22.102335] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:43.364 [2024-11-28 09:58:22.102394] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:43.626 [2024-11-28 09:58:22.257995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.626 [2024-11-28 09:58:22.258034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:43.626 [2024-11-28 09:58:22.258045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:43.626 [2024-11-28 09:58:22.258052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.626 [2024-11-28 09:58:22.258091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.626 [2024-11-28 09:58:22.258101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:43.626 [2024-11-28 09:58:22.258108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:43.626 [2024-11-28 09:58:22.258114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.626 [2024-11-28 09:58:22.258127] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:43.626 [2024-11-28 09:58:22.258702] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:43.626 [2024-11-28 09:58:22.258721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.626 [2024-11-28 09:58:22.258728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:43.626 [2024-11-28 09:58:22.258735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:25:43.626 [2024-11-28 09:58:22.258741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.626 [2024-11-28 09:58:22.259992] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:43.626 [2024-11-28 09:58:22.270537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.626 [2024-11-28 09:58:22.270566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:43.626 [2024-11-28 09:58:22.270575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.546 ms 00:25:43.627 [2024-11-28 09:58:22.270582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.627 [2024-11-28 09:58:22.270630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.627 [2024-11-28 09:58:22.270638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:43.627 [2024-11-28 09:58:22.270645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:43.627 [2024-11-28 09:58:22.270650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.627 [2024-11-28 09:58:22.277020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:43.627 [2024-11-28 09:58:22.277044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:43.627 [2024-11-28 09:58:22.277052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.327 ms 00:25:43.627 [2024-11-28 09:58:22.277061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.627 [2024-11-28 09:58:22.277118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.627 [2024-11-28 09:58:22.277125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:43.627 [2024-11-28 09:58:22.277132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:43.627 [2024-11-28 09:58:22.277138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.627 [2024-11-28 09:58:22.277185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.627 [2024-11-28 09:58:22.277194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:43.627 [2024-11-28 09:58:22.277201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:43.627 [2024-11-28 09:58:22.277208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.627 [2024-11-28 09:58:22.277227] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:43.627 [2024-11-28 09:58:22.280249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.627 [2024-11-28 09:58:22.280274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:43.627 [2024-11-28 09:58:22.280283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.027 ms 00:25:43.627 [2024-11-28 09:58:22.280289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.627 [2024-11-28 09:58:22.280314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.627 [2024-11-28 09:58:22.280320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:43.627 [2024-11-28 09:58:22.280327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:43.627 [2024-11-28 09:58:22.280332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.627 [2024-11-28 09:58:22.280348] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:43.627 [2024-11-28 09:58:22.280366] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:43.627 [2024-11-28 09:58:22.280395] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:43.627 [2024-11-28 09:58:22.280409] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:43.627 [2024-11-28 09:58:22.280493] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:43.627 [2024-11-28 09:58:22.280502] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:43.627 [2024-11-28 09:58:22.280511] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:43.627 [2024-11-28 09:58:22.280519] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:43.627 [2024-11-28 09:58:22.280527] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:43.627 [2024-11-28 09:58:22.280533] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:43.627 [2024-11-28 09:58:22.280539] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:43.627 [2024-11-28 09:58:22.280547] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:43.627 [2024-11-28 09:58:22.280554] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:43.627 [2024-11-28 09:58:22.280560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.627 [2024-11-28 09:58:22.280566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:43.627 [2024-11-28 09:58:22.280572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:25:43.627 [2024-11-28 09:58:22.280577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.627 [2024-11-28 09:58:22.280641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.627 [2024-11-28 09:58:22.280648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:43.627 [2024-11-28 09:58:22.280654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:43.627 [2024-11-28 09:58:22.280660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.627 [2024-11-28 09:58:22.280739] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:43.627 [2024-11-28 09:58:22.280754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:43.627 [2024-11-28 09:58:22.280761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.627 [2024-11-28 09:58:22.280768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.627 [2024-11-28 09:58:22.280774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:43.627 [2024-11-28 09:58:22.280779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:43.627 [2024-11-28 09:58:22.280785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:43.627 [2024-11-28 09:58:22.280791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:43.627 [2024-11-28 09:58:22.280796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:43.627 [2024-11-28 09:58:22.280801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:43.627 [2024-11-28 09:58:22.280809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:43.627 [2024-11-28 09:58:22.280816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:43.627 [2024-11-28 09:58:22.280822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:43.627 [2024-11-28 09:58:22.280832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:43.627 [2024-11-28 09:58:22.280838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:43.627 [2024-11-28 09:58:22.280843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.627 [2024-11-28 09:58:22.280849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:43.627 [2024-11-28 09:58:22.280855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:43.627 [2024-11-28 09:58:22.280861] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.627 [2024-11-28 09:58:22.280867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:43.627 [2024-11-28 09:58:22.280872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:43.627 [2024-11-28 09:58:22.280877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.627 [2024-11-28 09:58:22.280882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:43.627 [2024-11-28 09:58:22.280888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:43.627 [2024-11-28 09:58:22.280892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.627 [2024-11-28 09:58:22.280897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:43.627 [2024-11-28 09:58:22.280902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:43.627 [2024-11-28 09:58:22.280908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.627 [2024-11-28 09:58:22.280913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:43.627 [2024-11-28 09:58:22.280918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:43.627 [2024-11-28 09:58:22.280923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:43.627 [2024-11-28 09:58:22.280928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:43.627 [2024-11-28 09:58:22.280933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:43.627 [2024-11-28 09:58:22.280938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.627 [2024-11-28 09:58:22.280943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:43.627 [2024-11-28 09:58:22.280948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:43.627 [2024-11-28 09:58:22.280953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:43.627 [2024-11-28 09:58:22.280958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:43.627 [2024-11-28 09:58:22.280964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:43.627 [2024-11-28 09:58:22.280969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.627 [2024-11-28 09:58:22.280973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:43.627 [2024-11-28 09:58:22.280978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:43.627 [2024-11-28 09:58:22.280984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.627 [2024-11-28 09:58:22.280989] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:43.627 [2024-11-28 09:58:22.280996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:43.627 [2024-11-28 09:58:22.281002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:43.627 [2024-11-28 09:58:22.281007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:43.627 [2024-11-28 09:58:22.281013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:43.627 [2024-11-28 09:58:22.281018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:43.627 [2024-11-28 09:58:22.281023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:43.627 
[2024-11-28 09:58:22.281028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:43.627 [2024-11-28 09:58:22.281033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:43.627 [2024-11-28 09:58:22.281038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:43.627 [2024-11-28 09:58:22.281046] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:43.627 [2024-11-28 09:58:22.281054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.627 [2024-11-28 09:58:22.281062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:43.628 [2024-11-28 09:58:22.281068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:43.628 [2024-11-28 09:58:22.281073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:43.628 [2024-11-28 09:58:22.281078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:43.628 [2024-11-28 09:58:22.281083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:43.628 [2024-11-28 09:58:22.281088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:43.628 [2024-11-28 09:58:22.281094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:43.628 [2024-11-28 09:58:22.281100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:43.628 [2024-11-28 09:58:22.281106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:43.628 [2024-11-28 09:58:22.281111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:43.628 [2024-11-28 09:58:22.281116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:43.628 [2024-11-28 09:58:22.281122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:43.628 [2024-11-28 09:58:22.281127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:43.628 [2024-11-28 09:58:22.281132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:43.628 [2024-11-28 09:58:22.281138] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:43.628 [2024-11-28 09:58:22.281144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:43.628 [2024-11-28 09:58:22.281150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:43.628 [2024-11-28 09:58:22.281165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:43.628 [2024-11-28 09:58:22.281171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:43.628 [2024-11-28 09:58:22.281182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:43.628 [2024-11-28 09:58:22.281190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.281196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:43.628 [2024-11-28 09:58:22.281202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:25:43.628 [2024-11-28 09:58:22.281208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.305555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.305585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:43.628 [2024-11-28 09:58:22.305594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.303 ms 00:25:43.628 [2024-11-28 09:58:22.305603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.305670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.305677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:43.628 [2024-11-28 09:58:22.305683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:43.628 [2024-11-28 09:58:22.305689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.344166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.344200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:43.628 [2024-11-28 09:58:22.344209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.437 ms 00:25:43.628 [2024-11-28 09:58:22.344216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.344250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.344258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:43.628 [2024-11-28 09:58:22.344268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:43.628 [2024-11-28 09:58:22.344274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.344687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.344711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:43.628 [2024-11-28 09:58:22.344720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:25:43.628 [2024-11-28 09:58:22.344725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.344835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.344842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:43.628 [2024-11-28 09:58:22.344849] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:25:43.628 [2024-11-28 09:58:22.344858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.356758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.356783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:43.628 [2024-11-28 09:58:22.356794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.883 ms 00:25:43.628 [2024-11-28 09:58:22.356800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.367495] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:43.628 [2024-11-28 09:58:22.367524] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:43.628 [2024-11-28 09:58:22.367535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.367542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:43.628 [2024-11-28 09:58:22.367549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.645 ms 00:25:43.628 [2024-11-28 09:58:22.367555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.386167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.386194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:43.628 [2024-11-28 09:58:22.386204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.580 ms 00:25:43.628 [2024-11-28 09:58:22.386211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.395562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.395588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:43.628 [2024-11-28 09:58:22.395596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.321 ms 00:25:43.628 [2024-11-28 09:58:22.395602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.404783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.404809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:43.628 [2024-11-28 09:58:22.404817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.155 ms 00:25:43.628 [2024-11-28 09:58:22.404822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.405303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.405321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:43.628 [2024-11-28 09:58:22.405331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:25:43.628 [2024-11-28 09:58:22.405337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.453969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.454003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:43.628 [2024-11-28 09:58:22.454018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.619 ms 00:25:43.628 [2024-11-28 09:58:22.454026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.462304] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:43.628 [2024-11-28 09:58:22.464627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.464650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:43.628 [2024-11-28 09:58:22.464659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.568 ms 00:25:43.628 [2024-11-28 09:58:22.464667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.464722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.464730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:43.628 [2024-11-28 09:58:22.464740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:43.628 [2024-11-28 09:58:22.464745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.465964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.465992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:43.628 [2024-11-28 09:58:22.466000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.175 ms 00:25:43.628 [2024-11-28 09:58:22.466006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.466028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.466035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:43.628 [2024-11-28 09:58:22.466041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:43.628 [2024-11-28 09:58:22.466048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.466083] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:43.628 [2024-11-28 09:58:22.466091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.466097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:43.628 [2024-11-28 09:58:22.466104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:43.628 [2024-11-28 09:58:22.466111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.484967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.484991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:43.628 [2024-11-28 09:58:22.485004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.842 ms 00:25:43.628 [2024-11-28 09:58:22.485011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.628 [2024-11-28 09:58:22.485070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.628 [2024-11-28 09:58:22.485079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:43.629 [2024-11-28 09:58:22.485086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:43.629 [2024-11-28 09:58:22.485093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:43.629 [2024-11-28 09:58:22.485978] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 227.594 ms, result 0 00:25:45.015  [2024-11-28T09:58:24.836Z] Copying: 9092/1048576 [kB] (9092 kBps) [2024-11-28T09:58:25.779Z] Copying: 21/1024 [MB] (12 MBps) [2024-11-28T09:58:26.723Z] Copying: 32/1024 [MB] (11 MBps) [2024-11-28T09:58:27.731Z] Copying: 43/1024 [MB] (11 MBps) [2024-11-28T09:58:28.675Z] Copying: 55/1024 [MB] (11 MBps) [2024-11-28T09:58:29.630Z] Copying: 66/1024 [MB] (10 MBps) [2024-11-28T09:58:31.018Z] Copying: 77/1024 [MB] (11 MBps) [2024-11-28T09:58:31.964Z] Copying: 88/1024 [MB] (11 MBps) [2024-11-28T09:58:32.910Z] Copying: 100/1024 [MB] (11 MBps) [2024-11-28T09:58:33.856Z] Copying: 111/1024 [MB] (11 MBps) [2024-11-28T09:58:34.801Z] Copying: 123/1024 [MB] (11 MBps) [2024-11-28T09:58:35.743Z] Copying: 135/1024 [MB] (11 MBps) [2024-11-28T09:58:36.684Z] Copying: 146/1024 [MB] (11 MBps) [2024-11-28T09:58:37.629Z] Copying: 158/1024 [MB] (11 MBps) [2024-11-28T09:58:39.018Z] Copying: 169/1024 [MB] (11 MBps) [2024-11-28T09:58:39.963Z] Copying: 181/1024 [MB] (11 MBps) [2024-11-28T09:58:40.908Z] Copying: 193/1024 [MB] (11 MBps) [2024-11-28T09:58:41.853Z] Copying: 204/1024 [MB] (11 MBps) [2024-11-28T09:58:42.798Z] Copying: 215/1024 [MB] (11 MBps) [2024-11-28T09:58:43.743Z] Copying: 227/1024 [MB] (11 MBps) [2024-11-28T09:58:44.688Z] Copying: 238/1024 [MB] (10 MBps) [2024-11-28T09:58:45.631Z] Copying: 249/1024 [MB] (11 MBps) [2024-11-28T09:58:47.021Z] Copying: 260/1024 [MB] (11 MBps) [2024-11-28T09:58:47.963Z] Copying: 272/1024 [MB] (11 MBps) [2024-11-28T09:58:48.908Z] Copying: 283/1024 [MB] (11 MBps) [2024-11-28T09:58:49.852Z] Copying: 294/1024 [MB] (11 MBps) [2024-11-28T09:58:50.798Z] Copying: 306/1024 [MB] (11 MBps) [2024-11-28T09:58:51.742Z] Copying: 318/1024 [MB] (11 MBps) [2024-11-28T09:58:52.686Z] Copying: 330/1024 [MB] (11 MBps) [2024-11-28T09:58:53.632Z] Copying: 341/1024 [MB] (11 MBps) [2024-11-28T09:58:55.020Z] Copying: 353/1024 [MB] (11 MBps) [2024-11-28T09:58:55.970Z] Copying: 365/1024 [MB] (11 MBps) [2024-11-28T09:58:56.916Z] Copying: 376/1024 [MB] (11 MBps) [2024-11-28T09:58:57.862Z] Copying: 388/1024 [MB] (11 MBps) [2024-11-28T09:58:58.808Z] Copying: 399/1024 [MB] (11 MBps) [2024-11-28T09:58:59.753Z] Copying: 412/1024 [MB] (12 MBps) [2024-11-28T09:59:00.698Z] Copying: 424/1024 [MB] (12 MBps) [2024-11-28T09:59:01.661Z] Copying: 436/1024 [MB] (11 MBps) [2024-11-28T09:59:02.657Z] Copying: 447/1024 [MB] (11 MBps) [2024-11-28T09:59:04.045Z] Copying: 459/1024 [MB] (11 MBps) [2024-11-28T09:59:04.991Z] Copying: 470/1024 [MB] (11 MBps) [2024-11-28T09:59:05.941Z] Copying: 481/1024 [MB] (10 MBps) [2024-11-28T09:59:06.886Z] Copying: 492/1024 [MB] (11 MBps) [2024-11-28T09:59:07.831Z] Copying: 504/1024 [MB] (11 MBps) [2024-11-28T09:59:08.776Z] Copying: 515/1024 [MB] (11 MBps) [2024-11-28T09:59:09.719Z] Copying: 526/1024 [MB] (10 MBps) [2024-11-28T09:59:10.663Z] Copying: 537/1024 [MB] (11 MBps) [2024-11-28T09:59:12.049Z] Copying: 549/1024 [MB] (11 MBps) [2024-11-28T09:59:12.993Z] Copying: 561/1024 [MB] (11 MBps) [2024-11-28T09:59:13.937Z] Copying: 571/1024 [MB] (10 MBps) [2024-11-28T09:59:14.881Z] Copying: 583/1024 [MB] (11 MBps) [2024-11-28T09:59:15.824Z] Copying: 595/1024 [MB] (11 MBps) [2024-11-28T09:59:16.768Z] Copying: 607/1024 [MB] (11 MBps) [2024-11-28T09:59:17.717Z] Copying: 619/1024 [MB] (11 MBps) [2024-11-28T09:59:18.660Z] Copying: 629/1024 [MB] (10 MBps) [2024-11-28T09:59:20.048Z] Copying: 641/1024 [MB] (11 MBps) 
[2024-11-28T09:59:20.990Z] Copying: 652/1024 [MB] (10 MBps) [2024-11-28T09:59:21.933Z] Copying: 663/1024 [MB] (11 MBps) [2024-11-28T09:59:22.877Z] Copying: 675/1024 [MB] (11 MBps) [2024-11-28T09:59:23.821Z] Copying: 686/1024 [MB] (11 MBps) [2024-11-28T09:59:24.764Z] Copying: 696/1024 [MB] (10 MBps) [2024-11-28T09:59:25.714Z] Copying: 707/1024 [MB] (11 MBps) [2024-11-28T09:59:26.652Z] Copying: 719/1024 [MB] (11 MBps) [2024-11-28T09:59:28.038Z] Copying: 731/1024 [MB] (11 MBps) [2024-11-28T09:59:28.980Z] Copying: 742/1024 [MB] (11 MBps) [2024-11-28T09:59:29.926Z] Copying: 754/1024 [MB] (11 MBps) [2024-11-28T09:59:30.872Z] Copying: 765/1024 [MB] (11 MBps) [2024-11-28T09:59:31.818Z] Copying: 776/1024 [MB] (11 MBps) [2024-11-28T09:59:32.764Z] Copying: 788/1024 [MB] (12 MBps) [2024-11-28T09:59:33.713Z] Copying: 800/1024 [MB] (11 MBps) [2024-11-28T09:59:34.659Z] Copying: 812/1024 [MB] (11 MBps) [2024-11-28T09:59:36.050Z] Copying: 823/1024 [MB] (11 MBps) [2024-11-28T09:59:36.667Z] Copying: 833/1024 [MB] (10 MBps) [2024-11-28T09:59:37.630Z] Copying: 845/1024 [MB] (11 MBps) [2024-11-28T09:59:39.016Z] Copying: 856/1024 [MB] (10 MBps) [2024-11-28T09:59:39.961Z] Copying: 867/1024 [MB] (11 MBps) [2024-11-28T09:59:40.907Z] Copying: 878/1024 [MB] (11 MBps) [2024-11-28T09:59:41.852Z] Copying: 889/1024 [MB] (10 MBps) [2024-11-28T09:59:42.797Z] Copying: 900/1024 [MB] (11 MBps) [2024-11-28T09:59:43.741Z] Copying: 912/1024 [MB] (11 MBps) [2024-11-28T09:59:44.684Z] Copying: 924/1024 [MB] (11 MBps) [2024-11-28T09:59:46.069Z] Copying: 935/1024 [MB] (11 MBps) [2024-11-28T09:59:46.643Z] Copying: 947/1024 [MB] (11 MBps) [2024-11-28T09:59:48.035Z] Copying: 958/1024 [MB] (11 MBps) [2024-11-28T09:59:48.980Z] Copying: 970/1024 [MB] (12 MBps) [2024-11-28T09:59:49.926Z] Copying: 981/1024 [MB] (11 MBps) [2024-11-28T09:59:50.870Z] Copying: 992/1024 [MB] (11 MBps) [2024-11-28T09:59:51.817Z] Copying: 1004/1024 [MB] (11 MBps) [2024-11-28T09:59:52.391Z] Copying: 1015/1024 [MB] (11 MBps) [2024-11-28T09:59:52.652Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-28 09:59:52.519795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.772 [2024-11-28 09:59:52.519866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:13.772 [2024-11-28 09:59:52.519891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:13.772 [2024-11-28 09:59:52.519898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.772 [2024-11-28 09:59:52.519916] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:13.772 [2024-11-28 09:59:52.522917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.772 [2024-11-28 09:59:52.522953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:13.772 [2024-11-28 09:59:52.522965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.987 ms 00:27:13.772 [2024-11-28 09:59:52.522974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.772 [2024-11-28 09:59:52.523208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.772 [2024-11-28 09:59:52.523225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:13.772 [2024-11-28 09:59:52.523235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:27:13.772 [2024-11-28 09:59:52.523248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.772 
[2024-11-28 09:59:52.529688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.772 [2024-11-28 09:59:52.529736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:13.772 [2024-11-28 09:59:52.529747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.423 ms 00:27:13.772 [2024-11-28 09:59:52.529755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.772 [2024-11-28 09:59:52.536115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.772 [2024-11-28 09:59:52.536337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:13.772 [2024-11-28 09:59:52.536346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.325 ms 00:27:13.772 [2024-11-28 09:59:52.536358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.772 [2024-11-28 09:59:52.556600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.772 [2024-11-28 09:59:52.556631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:13.772 [2024-11-28 09:59:52.556641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.192 ms 00:27:13.772 [2024-11-28 09:59:52.556647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.772 [2024-11-28 09:59:52.568606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.772 [2024-11-28 09:59:52.568635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:13.772 [2024-11-28 09:59:52.568645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.930 ms 00:27:13.772 [2024-11-28 09:59:52.568652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.033 [2024-11-28 09:59:52.906630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.033 [2024-11-28 09:59:52.906664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:14.033 [2024-11-28 09:59:52.906675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 337.947 ms 00:27:14.033 [2024-11-28 09:59:52.906681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.295 [2024-11-28 09:59:52.924981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.295 [2024-11-28 09:59:52.925009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:14.295 [2024-11-28 09:59:52.925018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.288 ms 00:27:14.295 [2024-11-28 09:59:52.925024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.295 [2024-11-28 09:59:52.943337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.295 [2024-11-28 09:59:52.943369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:14.295 [2024-11-28 09:59:52.943377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.286 ms 00:27:14.295 [2024-11-28 09:59:52.943383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.295 [2024-11-28 09:59:52.961295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.295 [2024-11-28 09:59:52.961320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:14.295 [2024-11-28 09:59:52.961329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.885 ms 00:27:14.295 [2024-11-28 09:59:52.961334] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.295 [2024-11-28 09:59:52.979198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.295 [2024-11-28 09:59:52.979225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:14.295 [2024-11-28 09:59:52.979233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.817 ms 00:27:14.296 [2024-11-28 09:59:52.979238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.296 [2024-11-28 09:59:52.979264] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:14.296 [2024-11-28 09:59:52.979276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:27:14.296 [2024-11-28 09:59:52.979285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979400] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 
09:59:52.979550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:27:14.296 [2024-11-28 09:59:52.979705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:14.296 [2024-11-28 09:59:52.979792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:14.297 [2024-11-28 09:59:52.979891] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:14.297 [2024-11-28 09:59:52.979899] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9e2861e-73d9-47cb-81fe-9ef994e3fa71 00:27:14.297 [2024-11-28 09:59:52.979905] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:27:14.297 [2024-11-28 09:59:52.979911] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 42176 00:27:14.297 [2024-11-28 09:59:52.979917] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 41216 00:27:14.297 [2024-11-28 09:59:52.979923] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0233 00:27:14.297 [2024-11-28 09:59:52.979932] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:14.297 [2024-11-28 09:59:52.979945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:14.297 [2024-11-28 09:59:52.979951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:14.297 [2024-11-28 09:59:52.979957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:14.297 [2024-11-28 09:59:52.979963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:14.297 [2024-11-28 09:59:52.979969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.297 [2024-11-28 09:59:52.979976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:14.297 [2024-11-28 09:59:52.979983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:27:14.297 [2024-11-28 09:59:52.979989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.297 [2024-11-28 09:59:52.989979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.297 [2024-11-28 09:59:52.990004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:14.297 [2024-11-28 09:59:52.990015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.977 ms 00:27:14.297 [2024-11-28 09:59:52.990022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.297 [2024-11-28 09:59:52.990324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.297 [2024-11-28 09:59:52.990332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:14.297 [2024-11-28 09:59:52.990339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:27:14.297 [2024-11-28 09:59:52.990344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.297 [2024-11-28 09:59:53.017896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.297 [2024-11-28 09:59:53.017926] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:14.297 [2024-11-28 09:59:53.017934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.297 [2024-11-28 09:59:53.017940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.297 [2024-11-28 09:59:53.017985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.297 [2024-11-28 09:59:53.017991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:14.297 [2024-11-28 09:59:53.017997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.297 [2024-11-28 09:59:53.018003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.297 [2024-11-28 09:59:53.018048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.297 [2024-11-28 09:59:53.018056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:14.297 [2024-11-28 09:59:53.018066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.297 [2024-11-28 09:59:53.018072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.297 [2024-11-28 09:59:53.018084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.297 [2024-11-28 09:59:53.018091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:14.297 [2024-11-28 09:59:53.018097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.297 [2024-11-28 09:59:53.018103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.297 [2024-11-28 09:59:53.081432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.297 [2024-11-28 09:59:53.081469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:14.297 [2024-11-28 09:59:53.081480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.297 [2024-11-28 09:59:53.081487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.297 [2024-11-28 09:59:53.133311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.297 [2024-11-28 09:59:53.133348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:14.297 [2024-11-28 09:59:53.133357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.297 [2024-11-28 09:59:53.133364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.297 [2024-11-28 09:59:53.133432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.297 [2024-11-28 09:59:53.133440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:14.297 [2024-11-28 09:59:53.133447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.297 [2024-11-28 09:59:53.133458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.297 [2024-11-28 09:59:53.133486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.297 [2024-11-28 09:59:53.133493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:14.297 [2024-11-28 09:59:53.133500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.297 [2024-11-28 09:59:53.133506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.297 [2024-11-28 09:59:53.133581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:27:14.297 [2024-11-28 09:59:53.133589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:14.297 [2024-11-28 09:59:53.133596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.297 [2024-11-28 09:59:53.133602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.297 [2024-11-28 09:59:53.133629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.297 [2024-11-28 09:59:53.133636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:14.297 [2024-11-28 09:59:53.133643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.297 [2024-11-28 09:59:53.133650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.297 [2024-11-28 09:59:53.133684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.297 [2024-11-28 09:59:53.133691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:14.297 [2024-11-28 09:59:53.133698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.297 [2024-11-28 09:59:53.133704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.297 [2024-11-28 09:59:53.133743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:14.297 [2024-11-28 09:59:53.133751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:14.297 [2024-11-28 09:59:53.133758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:14.297 [2024-11-28 09:59:53.133765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.297 [2024-11-28 09:59:53.133873] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 614.049 ms, result 0 00:27:14.870 00:27:14.870 00:27:14.870 09:59:53 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:17.425 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:17.425 09:59:55 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:17.425 09:59:55 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:27:17.425 09:59:55 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:17.425 09:59:56 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:17.425 09:59:56 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:17.425 Process with pid 77351 is not found 00:27:17.425 09:59:56 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77351 00:27:17.425 09:59:56 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77351 ']' 00:27:17.425 09:59:56 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77351 00:27:17.425 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77351) - No such process 00:27:17.425 09:59:56 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77351 is not found' 00:27:17.425 Remove shared memory files 00:27:17.425 09:59:56 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:27:17.425 09:59:56 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:17.425 09:59:56 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:27:17.425 09:59:56 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:27:17.425 
09:59:56 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:27:17.425 09:59:56 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:17.425 09:59:56 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:27:17.425 00:27:17.425 real 6m26.962s 00:27:17.425 user 6m14.773s 00:27:17.425 sys 0m11.893s 00:27:17.425 09:59:56 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:17.425 ************************************ 00:27:17.425 END TEST ftl_restore 00:27:17.425 ************************************ 00:27:17.425 09:59:56 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:27:17.425 09:59:56 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:17.425 09:59:56 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:17.425 09:59:56 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:17.425 09:59:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:17.425 ************************************ 00:27:17.425 START TEST ftl_dirty_shutdown 00:27:17.425 ************************************ 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:17.425 * Looking for test storage... 00:27:17.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:17.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.425 --rc genhtml_branch_coverage=1 00:27:17.425 --rc genhtml_function_coverage=1 00:27:17.425 --rc genhtml_legend=1 00:27:17.425 --rc geninfo_all_blocks=1 00:27:17.425 --rc geninfo_unexecuted_blocks=1 00:27:17.425 00:27:17.425 ' 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:17.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.425 --rc genhtml_branch_coverage=1 00:27:17.425 --rc genhtml_function_coverage=1 00:27:17.425 --rc genhtml_legend=1 00:27:17.425 --rc geninfo_all_blocks=1 00:27:17.425 --rc geninfo_unexecuted_blocks=1 00:27:17.425 00:27:17.425 ' 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:17.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.425 --rc genhtml_branch_coverage=1 00:27:17.425 --rc genhtml_function_coverage=1 00:27:17.425 --rc genhtml_legend=1 00:27:17.425 --rc geninfo_all_blocks=1 00:27:17.425 --rc geninfo_unexecuted_blocks=1 00:27:17.425 00:27:17.425 ' 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:17.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.425 --rc genhtml_branch_coverage=1 00:27:17.425 --rc genhtml_function_coverage=1 00:27:17.425 --rc genhtml_legend=1 00:27:17.425 --rc geninfo_all_blocks=1 00:27:17.425 --rc geninfo_unexecuted_blocks=1 00:27:17.425 00:27:17.425 ' 00:27:17.425 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:17.687 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:27:17.687 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:17.687 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:17.687 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:17.687 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:17.687 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:27:17.688 09:59:56 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81406 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81406 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81406 ']' 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:17.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:17.688 09:59:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:17.688 [2024-11-28 09:59:56.402257] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
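
The launch-and-wait step recorded above (dirty_shutdown.sh@44-47) boils down to starting spdk_tgt on core 0 and polling its RPC socket. A minimal standalone sketch under the same paths as the log; the polling loop only approximates the waitforlisten helper from autotest_common.sh:

# Start the SPDK target and wait until /var/tmp/spdk.sock answers RPC calls.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
svcpid=$!
for _ in $(seq 1 100); do
    # rpc_get_methods succeeds once the target is listening on the default socket
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done
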
00:27:17.688 [2024-11-28 09:59:56.402372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81406 ] 00:27:17.688 [2024-11-28 09:59:56.559467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.949 [2024-11-28 09:59:56.650858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.521 09:59:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.521 09:59:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:18.521 09:59:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:18.521 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:27:18.521 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:18.521 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:27:18.521 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:18.521 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:18.782 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:18.782 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:18.782 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:18.782 09:59:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:18.782 09:59:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:18.782 09:59:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:18.782 09:59:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:18.783 09:59:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:19.044 09:59:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:19.044 { 00:27:19.044 "name": "nvme0n1", 00:27:19.044 "aliases": [ 00:27:19.044 "7afc2476-e322-472d-b0af-258573a4a4b7" 00:27:19.044 ], 00:27:19.044 "product_name": "NVMe disk", 00:27:19.044 "block_size": 4096, 00:27:19.044 "num_blocks": 1310720, 00:27:19.044 "uuid": "7afc2476-e322-472d-b0af-258573a4a4b7", 00:27:19.044 "numa_id": -1, 00:27:19.044 "assigned_rate_limits": { 00:27:19.044 "rw_ios_per_sec": 0, 00:27:19.044 "rw_mbytes_per_sec": 0, 00:27:19.044 "r_mbytes_per_sec": 0, 00:27:19.044 "w_mbytes_per_sec": 0 00:27:19.044 }, 00:27:19.044 "claimed": true, 00:27:19.044 "claim_type": "read_many_write_one", 00:27:19.044 "zoned": false, 00:27:19.044 "supported_io_types": { 00:27:19.044 "read": true, 00:27:19.044 "write": true, 00:27:19.044 "unmap": true, 00:27:19.044 "flush": true, 00:27:19.044 "reset": true, 00:27:19.044 "nvme_admin": true, 00:27:19.044 "nvme_io": true, 00:27:19.044 "nvme_io_md": false, 00:27:19.044 "write_zeroes": true, 00:27:19.044 "zcopy": false, 00:27:19.044 "get_zone_info": false, 00:27:19.044 "zone_management": false, 00:27:19.044 "zone_append": false, 00:27:19.044 "compare": true, 00:27:19.044 "compare_and_write": false, 00:27:19.044 "abort": true, 00:27:19.044 "seek_hole": false, 00:27:19.044 "seek_data": false, 00:27:19.044 
"copy": true, 00:27:19.044 "nvme_iov_md": false 00:27:19.044 }, 00:27:19.044 "driver_specific": { 00:27:19.044 "nvme": [ 00:27:19.044 { 00:27:19.044 "pci_address": "0000:00:11.0", 00:27:19.044 "trid": { 00:27:19.044 "trtype": "PCIe", 00:27:19.044 "traddr": "0000:00:11.0" 00:27:19.044 }, 00:27:19.044 "ctrlr_data": { 00:27:19.044 "cntlid": 0, 00:27:19.044 "vendor_id": "0x1b36", 00:27:19.044 "model_number": "QEMU NVMe Ctrl", 00:27:19.044 "serial_number": "12341", 00:27:19.044 "firmware_revision": "8.0.0", 00:27:19.044 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:19.044 "oacs": { 00:27:19.044 "security": 0, 00:27:19.044 "format": 1, 00:27:19.044 "firmware": 0, 00:27:19.044 "ns_manage": 1 00:27:19.044 }, 00:27:19.044 "multi_ctrlr": false, 00:27:19.044 "ana_reporting": false 00:27:19.044 }, 00:27:19.044 "vs": { 00:27:19.044 "nvme_version": "1.4" 00:27:19.044 }, 00:27:19.044 "ns_data": { 00:27:19.044 "id": 1, 00:27:19.044 "can_share": false 00:27:19.044 } 00:27:19.044 } 00:27:19.044 ], 00:27:19.044 "mp_policy": "active_passive" 00:27:19.044 } 00:27:19.044 } 00:27:19.044 ]' 00:27:19.044 09:59:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:19.044 09:59:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:19.044 09:59:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:19.044 09:59:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:19.044 09:59:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:19.044 09:59:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:19.044 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:19.044 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:19.044 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:19.044 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:19.044 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:19.305 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=fb2aac14-ead1-4b96-811f-424ffc26b932 00:27:19.305 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:19.305 09:59:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb2aac14-ead1-4b96-811f-424ffc26b932 00:27:19.567 09:59:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:19.567 09:59:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=2633549e-ff47-4487-a328-d6310c1ba48c 00:27:19.567 09:59:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2633549e-ff47-4487-a328-d6310c1ba48c 00:27:19.829 09:59:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=11fd1278-8822-4bb4-b7d5-7343b025c04f 00:27:19.829 09:59:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:27:19.829 09:59:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 11fd1278-8822-4bb4-b7d5-7343b025c04f 00:27:19.829 09:59:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:27:19.829 09:59:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:27:19.829 09:59:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=11fd1278-8822-4bb4-b7d5-7343b025c04f 00:27:19.829 09:59:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:27:19.829 09:59:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 11fd1278-8822-4bb4-b7d5-7343b025c04f 00:27:19.829 09:59:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=11fd1278-8822-4bb4-b7d5-7343b025c04f 00:27:19.829 09:59:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:19.829 09:59:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:19.829 09:59:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:19.829 09:59:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 11fd1278-8822-4bb4-b7d5-7343b025c04f 00:27:20.089 09:59:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:20.089 { 00:27:20.089 "name": "11fd1278-8822-4bb4-b7d5-7343b025c04f", 00:27:20.089 "aliases": [ 00:27:20.089 "lvs/nvme0n1p0" 00:27:20.089 ], 00:27:20.089 "product_name": "Logical Volume", 00:27:20.089 "block_size": 4096, 00:27:20.089 "num_blocks": 26476544, 00:27:20.089 "uuid": "11fd1278-8822-4bb4-b7d5-7343b025c04f", 00:27:20.089 "assigned_rate_limits": { 00:27:20.089 "rw_ios_per_sec": 0, 00:27:20.089 "rw_mbytes_per_sec": 0, 00:27:20.089 "r_mbytes_per_sec": 0, 00:27:20.089 "w_mbytes_per_sec": 0 00:27:20.089 }, 00:27:20.089 "claimed": false, 00:27:20.089 "zoned": false, 00:27:20.089 "supported_io_types": { 00:27:20.089 "read": true, 00:27:20.089 "write": true, 00:27:20.089 "unmap": true, 00:27:20.089 "flush": false, 00:27:20.089 "reset": true, 00:27:20.089 "nvme_admin": false, 00:27:20.089 "nvme_io": false, 00:27:20.089 "nvme_io_md": false, 00:27:20.089 "write_zeroes": true, 00:27:20.089 "zcopy": false, 00:27:20.089 "get_zone_info": false, 00:27:20.089 "zone_management": false, 00:27:20.089 "zone_append": false, 00:27:20.090 "compare": false, 00:27:20.090 "compare_and_write": false, 00:27:20.090 "abort": false, 00:27:20.090 "seek_hole": true, 00:27:20.090 "seek_data": true, 00:27:20.090 "copy": false, 00:27:20.090 "nvme_iov_md": false 00:27:20.090 }, 00:27:20.090 "driver_specific": { 00:27:20.090 "lvol": { 00:27:20.090 "lvol_store_uuid": "2633549e-ff47-4487-a328-d6310c1ba48c", 00:27:20.090 "base_bdev": "nvme0n1", 00:27:20.090 "thin_provision": true, 00:27:20.090 "num_allocated_clusters": 0, 00:27:20.090 "snapshot": false, 00:27:20.090 "clone": false, 00:27:20.090 "esnap_clone": false 00:27:20.090 } 00:27:20.090 } 00:27:20.090 } 00:27:20.090 ]' 00:27:20.090 09:59:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:20.090 09:59:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:20.090 09:59:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:20.090 09:59:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:20.090 09:59:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:20.090 09:59:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:20.090 09:59:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:27:20.090 09:59:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:20.090 09:59:58 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:20.351 09:59:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:20.351 09:59:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:20.351 09:59:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 11fd1278-8822-4bb4-b7d5-7343b025c04f 00:27:20.351 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=11fd1278-8822-4bb4-b7d5-7343b025c04f 00:27:20.351 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:20.351 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:20.351 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:20.351 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 11fd1278-8822-4bb4-b7d5-7343b025c04f 00:27:20.612 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:20.612 { 00:27:20.612 "name": "11fd1278-8822-4bb4-b7d5-7343b025c04f", 00:27:20.612 "aliases": [ 00:27:20.612 "lvs/nvme0n1p0" 00:27:20.612 ], 00:27:20.612 "product_name": "Logical Volume", 00:27:20.612 "block_size": 4096, 00:27:20.612 "num_blocks": 26476544, 00:27:20.612 "uuid": "11fd1278-8822-4bb4-b7d5-7343b025c04f", 00:27:20.612 "assigned_rate_limits": { 00:27:20.612 "rw_ios_per_sec": 0, 00:27:20.612 "rw_mbytes_per_sec": 0, 00:27:20.612 "r_mbytes_per_sec": 0, 00:27:20.612 "w_mbytes_per_sec": 0 00:27:20.612 }, 00:27:20.612 "claimed": false, 00:27:20.612 "zoned": false, 00:27:20.612 "supported_io_types": { 00:27:20.612 "read": true, 00:27:20.612 "write": true, 00:27:20.612 "unmap": true, 00:27:20.612 "flush": false, 00:27:20.612 "reset": true, 00:27:20.612 "nvme_admin": false, 00:27:20.612 "nvme_io": false, 00:27:20.612 "nvme_io_md": false, 00:27:20.612 "write_zeroes": true, 00:27:20.612 "zcopy": false, 00:27:20.612 "get_zone_info": false, 00:27:20.612 "zone_management": false, 00:27:20.612 "zone_append": false, 00:27:20.612 "compare": false, 00:27:20.612 "compare_and_write": false, 00:27:20.612 "abort": false, 00:27:20.612 "seek_hole": true, 00:27:20.612 "seek_data": true, 00:27:20.612 "copy": false, 00:27:20.612 "nvme_iov_md": false 00:27:20.612 }, 00:27:20.612 "driver_specific": { 00:27:20.612 "lvol": { 00:27:20.612 "lvol_store_uuid": "2633549e-ff47-4487-a328-d6310c1ba48c", 00:27:20.612 "base_bdev": "nvme0n1", 00:27:20.612 "thin_provision": true, 00:27:20.612 "num_allocated_clusters": 0, 00:27:20.612 "snapshot": false, 00:27:20.612 "clone": false, 00:27:20.612 "esnap_clone": false 00:27:20.612 } 00:27:20.612 } 00:27:20.612 } 00:27:20.612 ]' 00:27:20.612 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:20.612 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:20.612 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:20.612 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:20.612 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:20.612 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:20.612 09:59:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:27:20.612 09:59:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:20.873 09:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:27:20.873 09:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 11fd1278-8822-4bb4-b7d5-7343b025c04f 00:27:20.873 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=11fd1278-8822-4bb4-b7d5-7343b025c04f 00:27:20.873 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:20.873 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:20.873 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:20.873 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 11fd1278-8822-4bb4-b7d5-7343b025c04f 00:27:21.134 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:21.134 { 00:27:21.134 "name": "11fd1278-8822-4bb4-b7d5-7343b025c04f", 00:27:21.134 "aliases": [ 00:27:21.134 "lvs/nvme0n1p0" 00:27:21.134 ], 00:27:21.134 "product_name": "Logical Volume", 00:27:21.134 "block_size": 4096, 00:27:21.134 "num_blocks": 26476544, 00:27:21.134 "uuid": "11fd1278-8822-4bb4-b7d5-7343b025c04f", 00:27:21.134 "assigned_rate_limits": { 00:27:21.135 "rw_ios_per_sec": 0, 00:27:21.135 "rw_mbytes_per_sec": 0, 00:27:21.135 "r_mbytes_per_sec": 0, 00:27:21.135 "w_mbytes_per_sec": 0 00:27:21.135 }, 00:27:21.135 "claimed": false, 00:27:21.135 "zoned": false, 00:27:21.135 "supported_io_types": { 00:27:21.135 "read": true, 00:27:21.135 "write": true, 00:27:21.135 "unmap": true, 00:27:21.135 "flush": false, 00:27:21.135 "reset": true, 00:27:21.135 "nvme_admin": false, 00:27:21.135 "nvme_io": false, 00:27:21.135 "nvme_io_md": false, 00:27:21.135 "write_zeroes": true, 00:27:21.135 "zcopy": false, 00:27:21.135 "get_zone_info": false, 00:27:21.135 "zone_management": false, 00:27:21.135 "zone_append": false, 00:27:21.135 "compare": false, 00:27:21.135 "compare_and_write": false, 00:27:21.135 "abort": false, 00:27:21.135 "seek_hole": true, 00:27:21.135 "seek_data": true, 00:27:21.135 "copy": false, 00:27:21.135 "nvme_iov_md": false 00:27:21.135 }, 00:27:21.135 "driver_specific": { 00:27:21.135 "lvol": { 00:27:21.135 "lvol_store_uuid": "2633549e-ff47-4487-a328-d6310c1ba48c", 00:27:21.135 "base_bdev": "nvme0n1", 00:27:21.135 "thin_provision": true, 00:27:21.135 "num_allocated_clusters": 0, 00:27:21.135 "snapshot": false, 00:27:21.135 "clone": false, 00:27:21.135 "esnap_clone": false 00:27:21.135 } 00:27:21.135 } 00:27:21.135 } 00:27:21.135 ]' 00:27:21.135 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:21.135 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:21.135 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:21.135 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:21.135 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:21.135 09:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:21.135 09:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:27:21.135 09:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 11fd1278-8822-4bb4-b7d5-7343b025c04f 
--l2p_dram_limit 10' 00:27:21.135 09:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:27:21.135 09:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:27:21.135 09:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:21.135 09:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 11fd1278-8822-4bb4-b7d5-7343b025c04f --l2p_dram_limit 10 -c nvc0n1p0 00:27:21.397 [2024-11-28 10:00:00.036778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.397 [2024-11-28 10:00:00.036826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:21.397 [2024-11-28 10:00:00.036841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:21.397 [2024-11-28 10:00:00.036848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.397 [2024-11-28 10:00:00.036904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.397 [2024-11-28 10:00:00.036912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:21.397 [2024-11-28 10:00:00.036921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:21.397 [2024-11-28 10:00:00.036928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.397 [2024-11-28 10:00:00.036946] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:21.397 [2024-11-28 10:00:00.037561] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:21.397 [2024-11-28 10:00:00.037586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.397 [2024-11-28 10:00:00.037594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:21.397 [2024-11-28 10:00:00.037603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:27:21.397 [2024-11-28 10:00:00.037609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.397 [2024-11-28 10:00:00.037666] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7a4e906c-1038-4a20-9093-7ce808a69d46 00:27:21.397 [2024-11-28 10:00:00.038988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.397 [2024-11-28 10:00:00.039020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:21.397 [2024-11-28 10:00:00.039030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:21.397 [2024-11-28 10:00:00.039039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.397 [2024-11-28 10:00:00.046017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.398 [2024-11-28 10:00:00.046049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:21.398 [2024-11-28 10:00:00.046057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.938 ms 00:27:21.398 [2024-11-28 10:00:00.046066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.398 [2024-11-28 10:00:00.046140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.398 [2024-11-28 10:00:00.046149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:21.398 [2024-11-28 10:00:00.046174] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:21.398 [2024-11-28 10:00:00.046187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.398 [2024-11-28 10:00:00.046232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.398 [2024-11-28 10:00:00.046243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:21.398 [2024-11-28 10:00:00.046253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:21.398 [2024-11-28 10:00:00.046261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.398 [2024-11-28 10:00:00.046279] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:21.398 [2024-11-28 10:00:00.049649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.398 [2024-11-28 10:00:00.049674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:21.398 [2024-11-28 10:00:00.049684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.371 ms 00:27:21.398 [2024-11-28 10:00:00.049690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.398 [2024-11-28 10:00:00.049720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.398 [2024-11-28 10:00:00.049727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:21.398 [2024-11-28 10:00:00.049735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:21.398 [2024-11-28 10:00:00.049742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.398 [2024-11-28 10:00:00.049757] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:21.398 [2024-11-28 10:00:00.049874] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:21.398 [2024-11-28 10:00:00.049889] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:21.398 [2024-11-28 10:00:00.049918] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:21.398 [2024-11-28 10:00:00.049929] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:21.398 [2024-11-28 10:00:00.049936] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:21.398 [2024-11-28 10:00:00.049945] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:21.398 [2024-11-28 10:00:00.049953] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:21.398 [2024-11-28 10:00:00.049961] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:21.398 [2024-11-28 10:00:00.049967] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:21.398 [2024-11-28 10:00:00.049975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.398 [2024-11-28 10:00:00.049988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:21.398 [2024-11-28 10:00:00.049998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:27:21.398 [2024-11-28 10:00:00.050004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.398 [2024-11-28 10:00:00.050074] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.398 [2024-11-28 10:00:00.050082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:21.398 [2024-11-28 10:00:00.050090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:21.398 [2024-11-28 10:00:00.050096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.398 [2024-11-28 10:00:00.050207] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:21.398 [2024-11-28 10:00:00.050217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:21.398 [2024-11-28 10:00:00.050226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:21.398 [2024-11-28 10:00:00.050233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.398 [2024-11-28 10:00:00.050242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:21.398 [2024-11-28 10:00:00.050248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:21.398 [2024-11-28 10:00:00.050256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:21.398 [2024-11-28 10:00:00.050261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:21.398 [2024-11-28 10:00:00.050269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:21.398 [2024-11-28 10:00:00.050274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:21.398 [2024-11-28 10:00:00.050283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:21.398 [2024-11-28 10:00:00.050289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:21.398 [2024-11-28 10:00:00.050296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:21.398 [2024-11-28 10:00:00.050302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:21.398 [2024-11-28 10:00:00.050309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:21.398 [2024-11-28 10:00:00.050315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.398 [2024-11-28 10:00:00.050325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:21.398 [2024-11-28 10:00:00.050331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:21.398 [2024-11-28 10:00:00.050338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.398 [2024-11-28 10:00:00.050344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:21.398 [2024-11-28 10:00:00.050352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:21.398 [2024-11-28 10:00:00.050359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.398 [2024-11-28 10:00:00.050366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:21.398 [2024-11-28 10:00:00.050372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:21.398 [2024-11-28 10:00:00.050379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.398 [2024-11-28 10:00:00.050386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:21.398 [2024-11-28 10:00:00.050393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:21.398 [2024-11-28 10:00:00.050398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.398 [2024-11-28 10:00:00.050405] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:21.398 [2024-11-28 10:00:00.050411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:21.398 [2024-11-28 10:00:00.050418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:21.398 [2024-11-28 10:00:00.050423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:21.398 [2024-11-28 10:00:00.050431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:21.398 [2024-11-28 10:00:00.050437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:21.398 [2024-11-28 10:00:00.050444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:21.398 [2024-11-28 10:00:00.050449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:21.398 [2024-11-28 10:00:00.050456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:21.398 [2024-11-28 10:00:00.050461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:21.398 [2024-11-28 10:00:00.050468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:21.398 [2024-11-28 10:00:00.050474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.398 [2024-11-28 10:00:00.050481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:21.398 [2024-11-28 10:00:00.050487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:21.398 [2024-11-28 10:00:00.050493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.398 [2024-11-28 10:00:00.050498] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:21.398 [2024-11-28 10:00:00.050508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:21.398 [2024-11-28 10:00:00.050515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:21.398 [2024-11-28 10:00:00.050522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:21.398 [2024-11-28 10:00:00.050529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:21.398 [2024-11-28 10:00:00.050537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:21.398 [2024-11-28 10:00:00.050543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:21.398 [2024-11-28 10:00:00.050550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:21.398 [2024-11-28 10:00:00.050556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:21.398 [2024-11-28 10:00:00.050563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:21.398 [2024-11-28 10:00:00.050573] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:21.398 [2024-11-28 10:00:00.050585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:21.398 [2024-11-28 10:00:00.050593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:21.398 [2024-11-28 10:00:00.050601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:21.398 [2024-11-28 10:00:00.050607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:21.398 [2024-11-28 10:00:00.050614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:21.398 [2024-11-28 10:00:00.050619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:21.398 [2024-11-28 10:00:00.050627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:21.398 [2024-11-28 10:00:00.050632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:21.398 [2024-11-28 10:00:00.050640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:21.399 [2024-11-28 10:00:00.050646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:21.399 [2024-11-28 10:00:00.050655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:21.399 [2024-11-28 10:00:00.050660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:21.399 [2024-11-28 10:00:00.050669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:21.399 [2024-11-28 10:00:00.050675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:21.399 [2024-11-28 10:00:00.050682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:21.399 [2024-11-28 10:00:00.050687] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:21.399 [2024-11-28 10:00:00.050696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:21.399 [2024-11-28 10:00:00.050703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:21.399 [2024-11-28 10:00:00.050710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:21.399 [2024-11-28 10:00:00.050716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:21.399 [2024-11-28 10:00:00.050723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:21.399 [2024-11-28 10:00:00.050730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.399 [2024-11-28 10:00:00.050737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:21.399 [2024-11-28 10:00:00.050744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:27:21.399 [2024-11-28 10:00:00.050751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.399 [2024-11-28 10:00:00.050783] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:21.399 [2024-11-28 10:00:00.050794] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:25.609 [2024-11-28 10:00:03.895945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:03.895999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:25.609 [2024-11-28 10:00:03.896012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3845.148 ms 00:27:25.609 [2024-11-28 10:00:03.896021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.609 [2024-11-28 10:00:03.919866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:03.919908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:25.609 [2024-11-28 10:00:03.919920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.674 ms 00:27:25.609 [2024-11-28 10:00:03.919929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.609 [2024-11-28 10:00:03.920033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:03.920043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:25.609 [2024-11-28 10:00:03.920051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:25.609 [2024-11-28 10:00:03.920064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.609 [2024-11-28 10:00:03.946854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:03.946886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:25.609 [2024-11-28 10:00:03.946895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.763 ms 00:27:25.609 [2024-11-28 10:00:03.946904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.609 [2024-11-28 10:00:03.946931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:03.946939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:25.609 [2024-11-28 10:00:03.946946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:25.609 [2024-11-28 10:00:03.946959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.609 [2024-11-28 10:00:03.947387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:03.947411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:25.609 [2024-11-28 10:00:03.947419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:27:25.609 [2024-11-28 10:00:03.947428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.609 [2024-11-28 10:00:03.947512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:03.947524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:25.609 [2024-11-28 10:00:03.947531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:25.609 [2024-11-28 10:00:03.947541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.609 [2024-11-28 10:00:03.960678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:03.960705] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:25.609 [2024-11-28 10:00:03.960713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.122 ms 00:27:25.609 [2024-11-28 10:00:03.960721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.609 [2024-11-28 10:00:03.990830] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:25.609 [2024-11-28 10:00:03.993864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:03.993891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:25.609 [2024-11-28 10:00:03.993910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.084 ms 00:27:25.609 [2024-11-28 10:00:03.993918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.609 [2024-11-28 10:00:04.067298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:04.067331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:25.609 [2024-11-28 10:00:04.067343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.350 ms 00:27:25.609 [2024-11-28 10:00:04.067350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.609 [2024-11-28 10:00:04.067503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:04.067512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:25.609 [2024-11-28 10:00:04.067524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:27:25.609 [2024-11-28 10:00:04.067530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.609 [2024-11-28 10:00:04.086202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:04.086230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:25.609 [2024-11-28 10:00:04.086241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.634 ms 00:27:25.609 [2024-11-28 10:00:04.086248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.609 [2024-11-28 10:00:04.104247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:04.104273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:25.609 [2024-11-28 10:00:04.104284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.964 ms 00:27:25.609 [2024-11-28 10:00:04.104290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.609 [2024-11-28 10:00:04.104731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:04.104746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:25.609 [2024-11-28 10:00:04.104758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:27:25.609 [2024-11-28 10:00:04.104765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.609 [2024-11-28 10:00:04.167551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:04.167580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:25.609 [2024-11-28 10:00:04.167593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.751 ms 00:27:25.609 [2024-11-28 10:00:04.167601] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.609 [2024-11-28 10:00:04.187569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.609 [2024-11-28 10:00:04.187598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:25.609 [2024-11-28 10:00:04.187609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.911 ms 00:27:25.609 [2024-11-28 10:00:04.187616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.610 [2024-11-28 10:00:04.205890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.610 [2024-11-28 10:00:04.205924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:25.610 [2024-11-28 10:00:04.205934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.243 ms 00:27:25.610 [2024-11-28 10:00:04.205940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.610 [2024-11-28 10:00:04.225116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.610 [2024-11-28 10:00:04.225143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:25.610 [2024-11-28 10:00:04.225159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.145 ms 00:27:25.610 [2024-11-28 10:00:04.225166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.610 [2024-11-28 10:00:04.225200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.610 [2024-11-28 10:00:04.225208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:25.610 [2024-11-28 10:00:04.225219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:25.610 [2024-11-28 10:00:04.225225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.610 [2024-11-28 10:00:04.225293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.610 [2024-11-28 10:00:04.225303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:25.610 [2024-11-28 10:00:04.225311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:25.610 [2024-11-28 10:00:04.225318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.610 [2024-11-28 10:00:04.226169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4188.976 ms, result 0 00:27:25.610 { 00:27:25.610 "name": "ftl0", 00:27:25.610 "uuid": "7a4e906c-1038-4a20-9093-7ce808a69d46" 00:27:25.610 } 00:27:25.610 10:00:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:27:25.610 10:00:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:25.610 10:00:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:27:25.610 10:00:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:27:25.610 10:00:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:27:25.872 /dev/nbd0 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:27:25.872 1+0 records in 00:27:25.872 1+0 records out 00:27:25.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253414 s, 16.2 MB/s 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:27:25.872 10:00:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:27:26.134 [2024-11-28 10:00:04.752012] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:27:26.134 [2024-11-28 10:00:04.752128] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81549 ] 00:27:26.134 [2024-11-28 10:00:04.912262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.134 [2024-11-28 10:00:05.007322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.524  [2024-11-28T10:00:07.348Z] Copying: 195/1024 [MB] (195 MBps) [2024-11-28T10:00:08.291Z] Copying: 392/1024 [MB] (196 MBps) [2024-11-28T10:00:09.228Z] Copying: 588/1024 [MB] (196 MBps) [2024-11-28T10:00:10.163Z] Copying: 837/1024 [MB] (248 MBps) [2024-11-28T10:00:10.734Z] Copying: 1024/1024 [MB] (average 216 MBps) 00:27:31.854 00:27:31.854 10:00:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:33.889 10:00:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:27:33.889 [2024-11-28 10:00:12.609374] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:27:33.889 [2024-11-28 10:00:12.609495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81633 ] 00:27:34.150 [2024-11-28 10:00:12.768910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.150 [2024-11-28 10:00:12.867778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.539  [2024-11-28T10:00:15.355Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-28T10:00:16.288Z] Copying: 33/1024 [MB] (19 MBps) [2024-11-28T10:00:17.221Z] Copying: 66/1024 [MB] (33 MBps) [2024-11-28T10:00:18.152Z] Copying: 94/1024 [MB] (27 MBps) [2024-11-28T10:00:19.524Z] Copying: 126/1024 [MB] (32 MBps) [2024-11-28T10:00:20.090Z] Copying: 160/1024 [MB] (33 MBps) [2024-11-28T10:00:21.463Z] Copying: 194/1024 [MB] (34 MBps) [2024-11-28T10:00:22.398Z] Copying: 226/1024 [MB] (31 MBps) [2024-11-28T10:00:23.331Z] Copying: 261/1024 [MB] (35 MBps) [2024-11-28T10:00:24.264Z] Copying: 296/1024 [MB] (34 MBps) [2024-11-28T10:00:25.198Z] Copying: 331/1024 [MB] (35 MBps) [2024-11-28T10:00:26.132Z] Copying: 366/1024 [MB] (34 MBps) [2024-11-28T10:00:27.507Z] Copying: 401/1024 [MB] (35 MBps) [2024-11-28T10:00:28.441Z] Copying: 435/1024 [MB] (34 MBps) [2024-11-28T10:00:29.375Z] Copying: 470/1024 [MB] (35 MBps) [2024-11-28T10:00:30.310Z] Copying: 506/1024 [MB] (35 MBps) [2024-11-28T10:00:31.244Z] Copying: 540/1024 [MB] (34 MBps) [2024-11-28T10:00:32.179Z] Copying: 575/1024 [MB] (34 MBps) [2024-11-28T10:00:33.113Z] Copying: 610/1024 [MB] (35 MBps) [2024-11-28T10:00:34.485Z] Copying: 643/1024 [MB] (32 MBps) [2024-11-28T10:00:35.417Z] Copying: 676/1024 [MB] (32 MBps) [2024-11-28T10:00:36.352Z] Copying: 711/1024 [MB] (35 MBps) [2024-11-28T10:00:37.286Z] Copying: 745/1024 [MB] (34 MBps) [2024-11-28T10:00:38.221Z] Copying: 780/1024 [MB] (34 MBps) [2024-11-28T10:00:39.156Z] Copying: 816/1024 [MB] (35 MBps) [2024-11-28T10:00:40.091Z] Copying: 851/1024 [MB] (35 MBps) [2024-11-28T10:00:41.464Z] Copying: 885/1024 [MB] (33 MBps) [2024-11-28T10:00:42.395Z] Copying: 917/1024 [MB] (32 MBps) [2024-11-28T10:00:43.327Z] Copying: 952/1024 [MB] (34 MBps) [2024-11-28T10:00:44.264Z] Copying: 987/1024 [MB] (35 MBps) [2024-11-28T10:00:44.264Z] Copying: 1021/1024 [MB] (33 MBps) [2024-11-28T10:00:44.833Z] Copying: 1024/1024 [MB] (average 32 MBps) 00:28:05.953 00:28:05.953 10:00:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:05.953 10:00:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:06.214 10:00:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:06.476 [2024-11-28 10:00:45.151778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.476 [2024-11-28 10:00:45.151826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:06.476 [2024-11-28 10:00:45.151838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:06.476 [2024-11-28 10:00:45.151849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.476 [2024-11-28 10:00:45.151867] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:06.476 [2024-11-28 10:00:45.154068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:06.476 [2024-11-28 10:00:45.154093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:06.476 [2024-11-28 10:00:45.154103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 00:28:06.476 [2024-11-28 10:00:45.154110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.476 [2024-11-28 10:00:45.156768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.476 [2024-11-28 10:00:45.156795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:06.476 [2024-11-28 10:00:45.156806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:28:06.476 [2024-11-28 10:00:45.156812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.476 [2024-11-28 10:00:45.171997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.476 [2024-11-28 10:00:45.172023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:06.476 [2024-11-28 10:00:45.172033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.165 ms 00:28:06.476 [2024-11-28 10:00:45.172039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.476 [2024-11-28 10:00:45.176661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.476 [2024-11-28 10:00:45.176682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:06.476 [2024-11-28 10:00:45.176692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.594 ms 00:28:06.476 [2024-11-28 10:00:45.176698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.476 [2024-11-28 10:00:45.195252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.476 [2024-11-28 10:00:45.195277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:06.476 [2024-11-28 10:00:45.195287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.502 ms 00:28:06.476 [2024-11-28 10:00:45.195293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.476 [2024-11-28 10:00:45.208230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.476 [2024-11-28 10:00:45.208256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:06.476 [2024-11-28 10:00:45.208269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.905 ms 00:28:06.476 [2024-11-28 10:00:45.208276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.476 [2024-11-28 10:00:45.208409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.476 [2024-11-28 10:00:45.208419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:06.476 [2024-11-28 10:00:45.208429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:28:06.476 [2024-11-28 10:00:45.208435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.476 [2024-11-28 10:00:45.226815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.476 [2024-11-28 10:00:45.226840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:06.476 [2024-11-28 10:00:45.226860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.365 ms 00:28:06.476 [2024-11-28 10:00:45.226866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.476 [2024-11-28 
10:00:45.244804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.476 [2024-11-28 10:00:45.244829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:06.476 [2024-11-28 10:00:45.244839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.908 ms 00:28:06.476 [2024-11-28 10:00:45.244845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.476 [2024-11-28 10:00:45.262188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.476 [2024-11-28 10:00:45.262219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:06.476 [2024-11-28 10:00:45.262228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.310 ms 00:28:06.476 [2024-11-28 10:00:45.262234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.476 [2024-11-28 10:00:45.279120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.476 [2024-11-28 10:00:45.279144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:06.476 [2024-11-28 10:00:45.279160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.829 ms 00:28:06.476 [2024-11-28 10:00:45.279166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.476 [2024-11-28 10:00:45.279194] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:06.477 [2024-11-28 10:00:45.279206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 
261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279645] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 10:00:45.279827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:06.477 [2024-11-28 
10:00:45.279835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:06.478 [2024-11-28 10:00:45.279841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:06.478 [2024-11-28 10:00:45.279848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:06.478 [2024-11-28 10:00:45.279854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:06.478 [2024-11-28 10:00:45.279863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:06.478 [2024-11-28 10:00:45.279870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:06.478 [2024-11-28 10:00:45.279877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:06.478 [2024-11-28 10:00:45.279883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:06.478 [2024-11-28 10:00:45.279891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:06.478 [2024-11-28 10:00:45.279897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:06.478 [2024-11-28 10:00:45.279905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:06.478 [2024-11-28 10:00:45.279917] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:06.478 [2024-11-28 10:00:45.279924] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7a4e906c-1038-4a20-9093-7ce808a69d46 00:28:06.478 [2024-11-28 10:00:45.279931] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:06.478 [2024-11-28 10:00:45.279939] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:06.478 [2024-11-28 10:00:45.279947] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:06.478 [2024-11-28 10:00:45.279954] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:06.478 [2024-11-28 10:00:45.279960] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:06.478 [2024-11-28 10:00:45.279968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:06.478 [2024-11-28 10:00:45.279974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:06.478 [2024-11-28 10:00:45.279980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:06.478 [2024-11-28 10:00:45.279986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:06.478 [2024-11-28 10:00:45.279993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.478 [2024-11-28 10:00:45.279999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:06.478 [2024-11-28 10:00:45.280006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:28:06.478 [2024-11-28 10:00:45.280012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.478 [2024-11-28 10:00:45.290023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.478 [2024-11-28 10:00:45.290047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:06.478 [2024-11-28 10:00:45.290057] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.985 ms 00:28:06.478 [2024-11-28 10:00:45.290064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.478 [2024-11-28 10:00:45.290375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.478 [2024-11-28 10:00:45.290389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:06.478 [2024-11-28 10:00:45.290397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:28:06.478 [2024-11-28 10:00:45.290404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.478 [2024-11-28 10:00:45.325069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.478 [2024-11-28 10:00:45.325095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:06.478 [2024-11-28 10:00:45.325106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.478 [2024-11-28 10:00:45.325114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.478 [2024-11-28 10:00:45.325168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.478 [2024-11-28 10:00:45.325175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:06.478 [2024-11-28 10:00:45.325183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.478 [2024-11-28 10:00:45.325189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.478 [2024-11-28 10:00:45.325271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.478 [2024-11-28 10:00:45.325283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:06.478 [2024-11-28 10:00:45.325291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.478 [2024-11-28 10:00:45.325298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.478 [2024-11-28 10:00:45.325314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.478 [2024-11-28 10:00:45.325321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:06.478 [2024-11-28 10:00:45.325328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.478 [2024-11-28 10:00:45.325334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.739 [2024-11-28 10:00:45.387367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.739 [2024-11-28 10:00:45.387401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:06.739 [2024-11-28 10:00:45.387411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.739 [2024-11-28 10:00:45.387417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.739 [2024-11-28 10:00:45.437718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.739 [2024-11-28 10:00:45.437753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:06.739 [2024-11-28 10:00:45.437765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.739 [2024-11-28 10:00:45.437771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.739 [2024-11-28 10:00:45.437843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.739 [2024-11-28 10:00:45.437852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize core IO channel 00:28:06.739 [2024-11-28 10:00:45.437863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.739 [2024-11-28 10:00:45.437870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.739 [2024-11-28 10:00:45.437924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.739 [2024-11-28 10:00:45.437932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:06.739 [2024-11-28 10:00:45.437949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.739 [2024-11-28 10:00:45.437955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.739 [2024-11-28 10:00:45.438034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.739 [2024-11-28 10:00:45.438043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:06.739 [2024-11-28 10:00:45.438051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.739 [2024-11-28 10:00:45.438059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.739 [2024-11-28 10:00:45.438088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.739 [2024-11-28 10:00:45.438095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:06.739 [2024-11-28 10:00:45.438102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.739 [2024-11-28 10:00:45.438108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.739 [2024-11-28 10:00:45.438145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.739 [2024-11-28 10:00:45.438163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:06.739 [2024-11-28 10:00:45.438172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.739 [2024-11-28 10:00:45.438180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.739 [2024-11-28 10:00:45.438224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.739 [2024-11-28 10:00:45.438231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:06.739 [2024-11-28 10:00:45.438239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.739 [2024-11-28 10:00:45.438245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.739 [2024-11-28 10:00:45.438370] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 286.552 ms, result 0 00:28:06.739 true 00:28:06.739 10:00:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81406 00:28:06.739 10:00:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81406 00:28:06.739 10:00:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:06.739 [2024-11-28 10:00:45.514905] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:28:06.739 [2024-11-28 10:00:45.514996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81976 ] 00:28:07.051 [2024-11-28 10:00:45.664480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.051 [2024-11-28 10:00:45.755543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.481  [2024-11-28T10:00:48.303Z] Copying: 250/1024 [MB] (250 MBps) [2024-11-28T10:00:49.247Z] Copying: 505/1024 [MB] (254 MBps) [2024-11-28T10:00:50.187Z] Copying: 754/1024 [MB] (249 MBps) [2024-11-28T10:00:50.187Z] Copying: 1002/1024 [MB] (248 MBps) [2024-11-28T10:00:50.758Z] Copying: 1024/1024 [MB] (average 250 MBps) 00:28:11.878 00:28:11.878 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81406 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:28:11.878 10:00:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:11.878 [2024-11-28 10:00:50.724275] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:28:11.878 [2024-11-28 10:00:50.724395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82034 ] 00:28:12.139 [2024-11-28 10:00:50.878925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.139 [2024-11-28 10:00:50.966471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.401 [2024-11-28 10:00:51.200091] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:12.401 [2024-11-28 10:00:51.200146] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:12.401 [2024-11-28 10:00:51.263590] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:12.401 [2024-11-28 10:00:51.264181] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:12.401 [2024-11-28 10:00:51.264632] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:12.975 [2024-11-28 10:00:51.713674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.975 [2024-11-28 10:00:51.713857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:12.975 [2024-11-28 10:00:51.713875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:12.975 [2024-11-28 10:00:51.713887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.975 [2024-11-28 10:00:51.713935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.976 [2024-11-28 10:00:51.713952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:12.976 [2024-11-28 10:00:51.713960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:12.976 [2024-11-28 10:00:51.713965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.976 [2024-11-28 10:00:51.713982] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:12.976 
[2024-11-28 10:00:51.714546] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:12.976 [2024-11-28 10:00:51.714561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.976 [2024-11-28 10:00:51.714569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:12.976 [2024-11-28 10:00:51.714576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:28:12.976 [2024-11-28 10:00:51.714582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.976 [2024-11-28 10:00:51.715838] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:12.976 [2024-11-28 10:00:51.726370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.976 [2024-11-28 10:00:51.726397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:12.976 [2024-11-28 10:00:51.726407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.533 ms 00:28:12.976 [2024-11-28 10:00:51.726414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.976 [2024-11-28 10:00:51.726458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.976 [2024-11-28 10:00:51.726466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:12.976 [2024-11-28 10:00:51.726472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:12.976 [2024-11-28 10:00:51.726478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.976 [2024-11-28 10:00:51.732659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.976 [2024-11-28 10:00:51.732797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:12.976 [2024-11-28 10:00:51.732810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.138 ms 00:28:12.976 [2024-11-28 10:00:51.732817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.976 [2024-11-28 10:00:51.732878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.976 [2024-11-28 10:00:51.732885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:12.976 [2024-11-28 10:00:51.732891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:12.976 [2024-11-28 10:00:51.732897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.976 [2024-11-28 10:00:51.732932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.976 [2024-11-28 10:00:51.732940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:12.976 [2024-11-28 10:00:51.732947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:12.976 [2024-11-28 10:00:51.732953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.976 [2024-11-28 10:00:51.732968] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:12.976 [2024-11-28 10:00:51.736009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.976 [2024-11-28 10:00:51.736109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:12.976 [2024-11-28 10:00:51.736121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.046 ms 00:28:12.976 [2024-11-28 10:00:51.736127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:12.976 [2024-11-28 10:00:51.736171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.976 [2024-11-28 10:00:51.736179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:12.976 [2024-11-28 10:00:51.736186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:12.976 [2024-11-28 10:00:51.736192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.976 [2024-11-28 10:00:51.736209] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:12.976 [2024-11-28 10:00:51.736226] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:12.976 [2024-11-28 10:00:51.736255] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:12.976 [2024-11-28 10:00:51.736267] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:12.976 [2024-11-28 10:00:51.736351] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:12.976 [2024-11-28 10:00:51.736360] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:12.976 [2024-11-28 10:00:51.736368] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:12.976 [2024-11-28 10:00:51.736378] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:12.976 [2024-11-28 10:00:51.736386] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:12.976 [2024-11-28 10:00:51.736392] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:12.976 [2024-11-28 10:00:51.736399] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:12.976 [2024-11-28 10:00:51.736404] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:12.976 [2024-11-28 10:00:51.736410] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:12.976 [2024-11-28 10:00:51.736416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.976 [2024-11-28 10:00:51.736422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:12.976 [2024-11-28 10:00:51.736428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:28:12.976 [2024-11-28 10:00:51.736434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.976 [2024-11-28 10:00:51.736497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.976 [2024-11-28 10:00:51.736506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:12.976 [2024-11-28 10:00:51.736512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:12.976 [2024-11-28 10:00:51.736517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.976 [2024-11-28 10:00:51.736607] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:12.976 [2024-11-28 10:00:51.736616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:12.976 [2024-11-28 10:00:51.736623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:12.976 [2024-11-28 10:00:51.736629] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.976 [2024-11-28 10:00:51.736635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:12.976 [2024-11-28 10:00:51.736641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:12.976 [2024-11-28 10:00:51.736646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:12.976 [2024-11-28 10:00:51.736651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:12.976 [2024-11-28 10:00:51.736657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:12.976 [2024-11-28 10:00:51.736668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:12.976 [2024-11-28 10:00:51.736673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:12.976 [2024-11-28 10:00:51.736679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:12.976 [2024-11-28 10:00:51.736683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:12.976 [2024-11-28 10:00:51.736689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:12.976 [2024-11-28 10:00:51.736695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:12.976 [2024-11-28 10:00:51.736700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.976 [2024-11-28 10:00:51.736705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:12.976 [2024-11-28 10:00:51.736711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:12.976 [2024-11-28 10:00:51.736717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.976 [2024-11-28 10:00:51.736722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:12.976 [2024-11-28 10:00:51.736727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:12.976 [2024-11-28 10:00:51.736732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:12.977 [2024-11-28 10:00:51.736738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:12.977 [2024-11-28 10:00:51.736743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:12.977 [2024-11-28 10:00:51.736747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:12.977 [2024-11-28 10:00:51.736752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:12.977 [2024-11-28 10:00:51.736758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:12.977 [2024-11-28 10:00:51.736763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:12.977 [2024-11-28 10:00:51.736768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:12.977 [2024-11-28 10:00:51.736773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:12.977 [2024-11-28 10:00:51.736778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:12.977 [2024-11-28 10:00:51.736783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:12.977 [2024-11-28 10:00:51.736788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:12.977 [2024-11-28 10:00:51.736793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:12.977 [2024-11-28 10:00:51.736799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:12.977 
[2024-11-28 10:00:51.736804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:12.977 [2024-11-28 10:00:51.736809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:12.977 [2024-11-28 10:00:51.736815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:12.977 [2024-11-28 10:00:51.736820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:12.977 [2024-11-28 10:00:51.736824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.977 [2024-11-28 10:00:51.736829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:12.977 [2024-11-28 10:00:51.736834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:12.977 [2024-11-28 10:00:51.736840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.977 [2024-11-28 10:00:51.736846] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:12.977 [2024-11-28 10:00:51.736852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:12.977 [2024-11-28 10:00:51.736859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:12.977 [2024-11-28 10:00:51.736865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:12.977 [2024-11-28 10:00:51.736872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:12.977 [2024-11-28 10:00:51.736877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:12.977 [2024-11-28 10:00:51.736882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:12.977 [2024-11-28 10:00:51.736888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:12.977 [2024-11-28 10:00:51.736893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:12.977 [2024-11-28 10:00:51.736898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:12.977 [2024-11-28 10:00:51.736904] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:12.977 [2024-11-28 10:00:51.736911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:12.977 [2024-11-28 10:00:51.736917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:12.977 [2024-11-28 10:00:51.736923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:12.977 [2024-11-28 10:00:51.736928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:12.977 [2024-11-28 10:00:51.736934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:12.977 [2024-11-28 10:00:51.736940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:12.977 [2024-11-28 10:00:51.736945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:12.977 [2024-11-28 10:00:51.736950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:28:12.977 [2024-11-28 10:00:51.736955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:12.977 [2024-11-28 10:00:51.736960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:12.977 [2024-11-28 10:00:51.736965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:12.977 [2024-11-28 10:00:51.736971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:12.977 [2024-11-28 10:00:51.736976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:12.977 [2024-11-28 10:00:51.736982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:12.977 [2024-11-28 10:00:51.736987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:12.977 [2024-11-28 10:00:51.736992] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:12.977 [2024-11-28 10:00:51.736998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:12.977 [2024-11-28 10:00:51.737004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:12.977 [2024-11-28 10:00:51.737010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:12.977 [2024-11-28 10:00:51.737016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:12.977 [2024-11-28 10:00:51.737022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:12.977 [2024-11-28 10:00:51.737028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.977 [2024-11-28 10:00:51.737033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:12.977 [2024-11-28 10:00:51.737039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:28:12.977 [2024-11-28 10:00:51.737045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.977 [2024-11-28 10:00:51.761582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.977 [2024-11-28 10:00:51.761697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:12.977 [2024-11-28 10:00:51.761739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.490 ms 00:28:12.977 [2024-11-28 10:00:51.761758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.977 [2024-11-28 10:00:51.761839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.977 [2024-11-28 10:00:51.761856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:12.977 [2024-11-28 10:00:51.761872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:12.977 [2024-11-28 
10:00:51.761887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.977 [2024-11-28 10:00:51.807845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.977 [2024-11-28 10:00:51.807969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:12.977 [2024-11-28 10:00:51.808020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.908 ms 00:28:12.977 [2024-11-28 10:00:51.808039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.977 [2024-11-28 10:00:51.808081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.977 [2024-11-28 10:00:51.808100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:12.977 [2024-11-28 10:00:51.808116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:12.977 [2024-11-28 10:00:51.808131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.977 [2024-11-28 10:00:51.808578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.977 [2024-11-28 10:00:51.808715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:12.978 [2024-11-28 10:00:51.808764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:28:12.978 [2024-11-28 10:00:51.808788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.978 [2024-11-28 10:00:51.808915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.978 [2024-11-28 10:00:51.808934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:12.978 [2024-11-28 10:00:51.808949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:28:12.978 [2024-11-28 10:00:51.808965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.978 [2024-11-28 10:00:51.820786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.978 [2024-11-28 10:00:51.820870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:12.978 [2024-11-28 10:00:51.820907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.797 ms 00:28:12.978 [2024-11-28 10:00:51.820924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.978 [2024-11-28 10:00:51.831751] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:12.978 [2024-11-28 10:00:51.831845] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:12.978 [2024-11-28 10:00:51.831955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.978 [2024-11-28 10:00:51.831973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:12.978 [2024-11-28 10:00:51.831989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.942 ms 00:28:12.978 [2024-11-28 10:00:51.832004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.978 [2024-11-28 10:00:51.850589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.978 [2024-11-28 10:00:51.850682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:12.978 [2024-11-28 10:00:51.850721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.547 ms 00:28:12.978 [2024-11-28 10:00:51.850739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:13.240 [2024-11-28 10:00:51.859934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.240 [2024-11-28 10:00:51.860020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:13.240 [2024-11-28 10:00:51.860060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.160 ms 00:28:13.240 [2024-11-28 10:00:51.860077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.240 [2024-11-28 10:00:51.868947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.240 [2024-11-28 10:00:51.869032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:13.240 [2024-11-28 10:00:51.869071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.838 ms 00:28:13.240 [2024-11-28 10:00:51.869088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.240 [2024-11-28 10:00:51.869622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.240 [2024-11-28 10:00:51.869699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:13.240 [2024-11-28 10:00:51.869738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:28:13.240 [2024-11-28 10:00:51.869755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.240 [2024-11-28 10:00:51.918625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.240 [2024-11-28 10:00:51.918758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:13.240 [2024-11-28 10:00:51.918800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.845 ms 00:28:13.240 [2024-11-28 10:00:51.918818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.240 [2024-11-28 10:00:51.927263] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:13.240 [2024-11-28 10:00:51.929827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.240 [2024-11-28 10:00:51.929906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:13.240 [2024-11-28 10:00:51.929971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.970 ms 00:28:13.240 [2024-11-28 10:00:51.929994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.240 [2024-11-28 10:00:51.930085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.240 [2024-11-28 10:00:51.930107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:13.240 [2024-11-28 10:00:51.930123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:13.240 [2024-11-28 10:00:51.930138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.240 [2024-11-28 10:00:51.930220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.240 [2024-11-28 10:00:51.930242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:13.240 [2024-11-28 10:00:51.930259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:13.240 [2024-11-28 10:00:51.930320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.240 [2024-11-28 10:00:51.930358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.240 [2024-11-28 10:00:51.930375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:13.240 
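Aside on the superblock metadata dump above: each region is listed there as blk_offs/blk_sz in FTL blocks, while the dump_region notices report the same regions in MiB. The following is a small sketch (not part of SPDK or the test scripts) of that conversion, assuming the usual 4 KiB FTL block size; the figures it prints line up with the dump_region output, which is what makes the 4 KiB assumption plausible here.

    # Convert FTL layout region sizes from blocks (hex, as in the SB metadata dump)
    # to MiB, assuming 4 KiB FTL blocks. Region names/sizes are copied from the dump above.
    FTL_BLOCK_SIZE = 4096  # bytes per FTL block (assumption, consistent with the dump)

    regions = {"sb": "0x20", "l2p": "0x5000", "band_md": "0x80", "p2l0": "0x800", "trim_md": "0x40"}
    for name, blk_sz in regions.items():
        mib = int(blk_sz, 16) * FTL_BLOCK_SIZE / (1024 * 1024)
        print(f"{name:8s} {blk_sz:>7s} blocks = {mib:6.2f} MiB")
    # sb 0.12 MiB, l2p 80.00 MiB, band_md 0.50 MiB, p2l0 8.00 MiB, trim_md 0.25 MiB,
    # matching the dump_region notices earlier in the startup sequence.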
[2024-11-28 10:00:51.930391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:13.240 [2024-11-28 10:00:51.930406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.240 [2024-11-28 10:00:51.930443] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:13.240 [2024-11-28 10:00:51.930462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.240 [2024-11-28 10:00:51.930477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:13.240 [2024-11-28 10:00:51.930493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:13.240 [2024-11-28 10:00:51.930534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.240 [2024-11-28 10:00:51.949226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.240 [2024-11-28 10:00:51.949325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:13.240 [2024-11-28 10:00:51.949365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.666 ms 00:28:13.240 [2024-11-28 10:00:51.949383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.240 [2024-11-28 10:00:51.949658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.240 [2024-11-28 10:00:51.949709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:13.240 [2024-11-28 10:00:51.949792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:13.240 [2024-11-28 10:00:51.949911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.240 [2024-11-28 10:00:51.950912] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 236.851 ms, result 0 00:28:14.185  [2024-11-28T10:00:54.009Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-28T10:00:55.397Z] Copying: 21/1024 [MB] (11 MBps) [2024-11-28T10:00:55.969Z] Copying: 33/1024 [MB] (11 MBps) [2024-11-28T10:00:57.356Z] Copying: 47/1024 [MB] (13 MBps) [2024-11-28T10:00:58.297Z] Copying: 58/1024 [MB] (11 MBps) [2024-11-28T10:00:59.240Z] Copying: 69/1024 [MB] (10 MBps) [2024-11-28T10:01:00.206Z] Copying: 79/1024 [MB] (10 MBps) [2024-11-28T10:01:01.148Z] Copying: 91/1024 [MB] (11 MBps) [2024-11-28T10:01:02.092Z] Copying: 102/1024 [MB] (11 MBps) [2024-11-28T10:01:03.038Z] Copying: 113/1024 [MB] (11 MBps) [2024-11-28T10:01:03.984Z] Copying: 124/1024 [MB] (11 MBps) [2024-11-28T10:01:05.373Z] Copying: 136/1024 [MB] (11 MBps) [2024-11-28T10:01:06.316Z] Copying: 147/1024 [MB] (11 MBps) [2024-11-28T10:01:07.261Z] Copying: 158/1024 [MB] (11 MBps) [2024-11-28T10:01:08.207Z] Copying: 170/1024 [MB] (11 MBps) [2024-11-28T10:01:09.152Z] Copying: 181/1024 [MB] (11 MBps) [2024-11-28T10:01:10.097Z] Copying: 192/1024 [MB] (11 MBps) [2024-11-28T10:01:11.042Z] Copying: 203/1024 [MB] (10 MBps) [2024-11-28T10:01:11.988Z] Copying: 214/1024 [MB] (10 MBps) [2024-11-28T10:01:13.377Z] Copying: 225/1024 [MB] (11 MBps) [2024-11-28T10:01:14.321Z] Copying: 236/1024 [MB] (11 MBps) [2024-11-28T10:01:15.268Z] Copying: 247/1024 [MB] (11 MBps) [2024-11-28T10:01:16.210Z] Copying: 259/1024 [MB] (11 MBps) [2024-11-28T10:01:17.155Z] Copying: 270/1024 [MB] (11 MBps) [2024-11-28T10:01:18.100Z] Copying: 281/1024 [MB] (11 MBps) [2024-11-28T10:01:19.042Z] Copying: 293/1024 [MB] (11 MBps) [2024-11-28T10:01:20.006Z] Copying: 304/1024 [MB] (11 MBps) [2024-11-28T10:01:21.005Z] Copying: 315/1024 [MB] 
(11 MBps) [2024-11-28T10:01:22.393Z] Copying: 326/1024 [MB] (11 MBps) [2024-11-28T10:01:22.966Z] Copying: 337/1024 [MB] (11 MBps) [2024-11-28T10:01:24.354Z] Copying: 348/1024 [MB] (11 MBps) [2024-11-28T10:01:25.309Z] Copying: 359/1024 [MB] (10 MBps) [2024-11-28T10:01:26.253Z] Copying: 370/1024 [MB] (10 MBps) [2024-11-28T10:01:27.199Z] Copying: 382/1024 [MB] (11 MBps) [2024-11-28T10:01:28.144Z] Copying: 393/1024 [MB] (11 MBps) [2024-11-28T10:01:29.087Z] Copying: 404/1024 [MB] (11 MBps) [2024-11-28T10:01:30.032Z] Copying: 416/1024 [MB] (11 MBps) [2024-11-28T10:01:30.976Z] Copying: 427/1024 [MB] (11 MBps) [2024-11-28T10:01:32.365Z] Copying: 439/1024 [MB] (11 MBps) [2024-11-28T10:01:33.310Z] Copying: 451/1024 [MB] (11 MBps) [2024-11-28T10:01:34.253Z] Copying: 462/1024 [MB] (11 MBps) [2024-11-28T10:01:35.199Z] Copying: 474/1024 [MB] (11 MBps) [2024-11-28T10:01:36.150Z] Copying: 484/1024 [MB] (10 MBps) [2024-11-28T10:01:37.093Z] Copying: 495/1024 [MB] (11 MBps) [2024-11-28T10:01:38.037Z] Copying: 507/1024 [MB] (11 MBps) [2024-11-28T10:01:38.982Z] Copying: 518/1024 [MB] (11 MBps) [2024-11-28T10:01:40.371Z] Copying: 529/1024 [MB] (11 MBps) [2024-11-28T10:01:41.316Z] Copying: 541/1024 [MB] (11 MBps) [2024-11-28T10:01:42.259Z] Copying: 553/1024 [MB] (11 MBps) [2024-11-28T10:01:43.198Z] Copying: 564/1024 [MB] (11 MBps) [2024-11-28T10:01:44.143Z] Copying: 576/1024 [MB] (11 MBps) [2024-11-28T10:01:45.087Z] Copying: 587/1024 [MB] (11 MBps) [2024-11-28T10:01:46.036Z] Copying: 597/1024 [MB] (10 MBps) [2024-11-28T10:01:46.981Z] Copying: 609/1024 [MB] (11 MBps) [2024-11-28T10:01:48.367Z] Copying: 620/1024 [MB] (11 MBps) [2024-11-28T10:01:49.312Z] Copying: 631/1024 [MB] (11 MBps) [2024-11-28T10:01:50.263Z] Copying: 642/1024 [MB] (11 MBps) [2024-11-28T10:01:51.284Z] Copying: 653/1024 [MB] (11 MBps) [2024-11-28T10:01:52.228Z] Copying: 665/1024 [MB] (11 MBps) [2024-11-28T10:01:53.172Z] Copying: 675/1024 [MB] (10 MBps) [2024-11-28T10:01:54.117Z] Copying: 688/1024 [MB] (13 MBps) [2024-11-28T10:01:55.062Z] Copying: 700/1024 [MB] (11 MBps) [2024-11-28T10:01:56.005Z] Copying: 712/1024 [MB] (11 MBps) [2024-11-28T10:01:57.392Z] Copying: 722/1024 [MB] (10 MBps) [2024-11-28T10:01:57.964Z] Copying: 733/1024 [MB] (10 MBps) [2024-11-28T10:01:59.352Z] Copying: 745/1024 [MB] (11 MBps) [2024-11-28T10:02:00.296Z] Copying: 756/1024 [MB] (11 MBps) [2024-11-28T10:02:01.237Z] Copying: 767/1024 [MB] (11 MBps) [2024-11-28T10:02:02.173Z] Copying: 778/1024 [MB] (11 MBps) [2024-11-28T10:02:03.112Z] Copying: 790/1024 [MB] (11 MBps) [2024-11-28T10:02:04.054Z] Copying: 801/1024 [MB] (11 MBps) [2024-11-28T10:02:04.998Z] Copying: 812/1024 [MB] (11 MBps) [2024-11-28T10:02:06.388Z] Copying: 824/1024 [MB] (11 MBps) [2024-11-28T10:02:07.332Z] Copying: 835/1024 [MB] (11 MBps) [2024-11-28T10:02:08.276Z] Copying: 846/1024 [MB] (10 MBps) [2024-11-28T10:02:09.221Z] Copying: 857/1024 [MB] (10 MBps) [2024-11-28T10:02:10.166Z] Copying: 868/1024 [MB] (11 MBps) [2024-11-28T10:02:11.111Z] Copying: 879/1024 [MB] (11 MBps) [2024-11-28T10:02:12.054Z] Copying: 891/1024 [MB] (11 MBps) [2024-11-28T10:02:12.999Z] Copying: 902/1024 [MB] (11 MBps) [2024-11-28T10:02:14.387Z] Copying: 913/1024 [MB] (11 MBps) [2024-11-28T10:02:15.334Z] Copying: 925/1024 [MB] (11 MBps) [2024-11-28T10:02:16.278Z] Copying: 936/1024 [MB] (11 MBps) [2024-11-28T10:02:17.222Z] Copying: 947/1024 [MB] (10 MBps) [2024-11-28T10:02:18.167Z] Copying: 958/1024 [MB] (11 MBps) [2024-11-28T10:02:19.111Z] Copying: 969/1024 [MB] (11 MBps) [2024-11-28T10:02:20.057Z] Copying: 980/1024 [MB] (11 MBps) 
[2024-11-28T10:02:21.002Z] Copying: 992/1024 [MB] (11 MBps) [2024-11-28T10:02:22.434Z] Copying: 1003/1024 [MB] (11 MBps) [2024-11-28T10:02:23.053Z] Copying: 1015/1024 [MB] (11 MBps) [2024-11-28T10:02:23.627Z] Copying: 1048112/1048576 [kB] (8736 kBps) [2024-11-28T10:02:23.627Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-28 10:02:23.424831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.747 [2024-11-28 10:02:23.424929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:44.747 [2024-11-28 10:02:23.424951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:44.747 [2024-11-28 10:02:23.424961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.747 [2024-11-28 10:02:23.428181] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:44.747 [2024-11-28 10:02:23.432097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.747 [2024-11-28 10:02:23.432374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:44.747 [2024-11-28 10:02:23.432400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.857 ms 00:29:44.747 [2024-11-28 10:02:23.432417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.747 [2024-11-28 10:02:23.444236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.747 [2024-11-28 10:02:23.444299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:44.747 [2024-11-28 10:02:23.444314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.916 ms 00:29:44.747 [2024-11-28 10:02:23.444323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.747 [2024-11-28 10:02:23.467327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.747 [2024-11-28 10:02:23.467523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:44.747 [2024-11-28 10:02:23.467545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.984 ms 00:29:44.747 [2024-11-28 10:02:23.467554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.747 [2024-11-28 10:02:23.473768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.747 [2024-11-28 10:02:23.473811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:44.747 [2024-11-28 10:02:23.473823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.126 ms 00:29:44.747 [2024-11-28 10:02:23.473832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.747 [2024-11-28 10:02:23.501431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.747 [2024-11-28 10:02:23.501630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:44.747 [2024-11-28 10:02:23.501651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.551 ms 00:29:44.747 [2024-11-28 10:02:23.501659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.747 [2024-11-28 10:02:23.518982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.747 [2024-11-28 10:02:23.519029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:44.747 [2024-11-28 10:02:23.519043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.234 ms 00:29:44.747 [2024-11-28 10:02:23.519052] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.010 [2024-11-28 10:02:23.771372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.010 [2024-11-28 10:02:23.771433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:45.010 [2024-11-28 10:02:23.771454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 252.268 ms 00:29:45.010 [2024-11-28 10:02:23.771464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.010 [2024-11-28 10:02:23.797675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.010 [2024-11-28 10:02:23.797721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:45.010 [2024-11-28 10:02:23.797734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.195 ms 00:29:45.010 [2024-11-28 10:02:23.797756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.010 [2024-11-28 10:02:23.823479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.010 [2024-11-28 10:02:23.823677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:45.010 [2024-11-28 10:02:23.823697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.677 ms 00:29:45.010 [2024-11-28 10:02:23.823706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.010 [2024-11-28 10:02:23.848310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.010 [2024-11-28 10:02:23.848352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:45.010 [2024-11-28 10:02:23.848364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.512 ms 00:29:45.010 [2024-11-28 10:02:23.848371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.010 [2024-11-28 10:02:23.873157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.010 [2024-11-28 10:02:23.873200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:45.010 [2024-11-28 10:02:23.873212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.705 ms 00:29:45.010 [2024-11-28 10:02:23.873220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.010 [2024-11-28 10:02:23.873263] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:45.010 [2024-11-28 10:02:23.873279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 91392 / 261120 wr_cnt: 1 state: open 00:29:45.010 [2024-11-28 10:02:23.873289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:45.010 [2024-11-28 10:02:23.873298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:45.010 [2024-11-28 10:02:23.873306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:45.010 [2024-11-28 10:02:23.873313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:45.010 [2024-11-28 10:02:23.873321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:45.010 [2024-11-28 10:02:23.873329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:45.010 [2024-11-28 10:02:23.873337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 
0 / 261120 wr_cnt: 0 state: free 00:29:45.010 [2024-11-28 10:02:23.873344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:45.010 [2024-11-28 10:02:23.873352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:45.010 [2024-11-28 10:02:23.873360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:45.010 [2024-11-28 10:02:23.873368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:45.010 [2024-11-28 10:02:23.873375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873730] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 
10:02:23.873923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.873991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.874000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.874017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.874025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.874033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.874066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.874074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.874082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.874090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:45.011 [2024-11-28 10:02:23.874107] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:45.011 [2024-11-28 10:02:23.874116] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7a4e906c-1038-4a20-9093-7ce808a69d46 00:29:45.011 [2024-11-28 10:02:23.874137] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 91392 00:29:45.011 [2024-11-28 10:02:23.874145] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 92352 00:29:45.012 [2024-11-28 10:02:23.874173] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 91392 00:29:45.012 [2024-11-28 10:02:23.874183] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0105 00:29:45.012 [2024-11-28 10:02:23.874192] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:45.012 [2024-11-28 10:02:23.874200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:45.012 [2024-11-28 10:02:23.874208] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:45.012 [2024-11-28 10:02:23.874215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:45.012 [2024-11-28 10:02:23.874222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:45.012 [2024-11-28 10:02:23.874229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.012 [2024-11-28 10:02:23.874239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:45.012 [2024-11-28 10:02:23.874272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:29:45.012 [2024-11-28 10:02:23.874280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.273 [2024-11-28 10:02:23.889180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.273 [2024-11-28 10:02:23.889354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:45.273 [2024-11-28 10:02:23.889372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.879 ms 00:29:45.273 [2024-11-28 10:02:23.889381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.273 [2024-11-28 10:02:23.889804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.273 [2024-11-28 10:02:23.889817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:45.273 [2024-11-28 10:02:23.889835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:29:45.273 [2024-11-28 10:02:23.889842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.273 [2024-11-28 10:02:23.929009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.273 [2024-11-28 10:02:23.929211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:45.273 [2024-11-28 10:02:23.929235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.273 [2024-11-28 10:02:23.929244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.273 [2024-11-28 10:02:23.929318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.273 [2024-11-28 10:02:23.929328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:45.273 [2024-11-28 10:02:23.929345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.273 [2024-11-28 10:02:23.929354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.273 [2024-11-28 10:02:23.929429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.273 [2024-11-28 10:02:23.929443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:45.273 [2024-11-28 10:02:23.929451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.273 [2024-11-28 10:02:23.929459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.273 [2024-11-28 10:02:23.929475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.273 [2024-11-28 10:02:23.929484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:45.273 [2024-11-28 10:02:23.929492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.273 [2024-11-28 10:02:23.929499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.273 [2024-11-28 10:02:24.011851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
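Aside on the ftl_dev_dump_stats output above: the WAF figure is simply total writes divided by user writes, and the numbers reported in the dump reproduce the logged value. A quick arithmetic check (not part of the test):

    total_writes = 92352   # "total writes" from the stats dump above
    user_writes = 91392    # "user writes" from the stats dump above
    print(f"WAF = {total_writes / user_writes:.4f}")   # prints WAF = 1.0105, as logged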
00:29:45.273 [2024-11-28 10:02:24.011900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:45.273 [2024-11-28 10:02:24.011912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.273 [2024-11-28 10:02:24.011921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.273 [2024-11-28 10:02:24.067709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.273 [2024-11-28 10:02:24.067841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:45.273 [2024-11-28 10:02:24.067856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.273 [2024-11-28 10:02:24.067867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.273 [2024-11-28 10:02:24.067940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.273 [2024-11-28 10:02:24.067948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:45.273 [2024-11-28 10:02:24.067956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.273 [2024-11-28 10:02:24.067962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.273 [2024-11-28 10:02:24.067995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.273 [2024-11-28 10:02:24.068003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:45.273 [2024-11-28 10:02:24.068011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.273 [2024-11-28 10:02:24.068017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.273 [2024-11-28 10:02:24.068100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.273 [2024-11-28 10:02:24.068109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:45.273 [2024-11-28 10:02:24.068116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.273 [2024-11-28 10:02:24.068123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.273 [2024-11-28 10:02:24.068150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.273 [2024-11-28 10:02:24.068173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:45.273 [2024-11-28 10:02:24.068180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.273 [2024-11-28 10:02:24.068186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.273 [2024-11-28 10:02:24.068226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.273 [2024-11-28 10:02:24.068234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:45.273 [2024-11-28 10:02:24.068243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.273 [2024-11-28 10:02:24.068250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.273 [2024-11-28 10:02:24.068296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.273 [2024-11-28 10:02:24.068305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:45.273 [2024-11-28 10:02:24.068312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.273 [2024-11-28 10:02:24.068319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.274 [2024-11-28 10:02:24.068433] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 645.576 ms, result 0 00:29:46.217 00:29:46.217 00:29:46.217 10:02:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:48.766 10:02:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:48.766 [2024-11-28 10:02:27.448947] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:29:48.766 [2024-11-28 10:02:27.449347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83015 ] 00:29:48.766 [2024-11-28 10:02:27.613970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.029 [2024-11-28 10:02:27.754388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.291 [2024-11-28 10:02:28.093573] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:49.291 [2024-11-28 10:02:28.093667] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:49.554 [2024-11-28 10:02:28.260271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.555 [2024-11-28 10:02:28.260579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:49.555 [2024-11-28 10:02:28.260606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:49.555 [2024-11-28 10:02:28.260617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.555 [2024-11-28 10:02:28.260690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.555 [2024-11-28 10:02:28.260706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:49.555 [2024-11-28 10:02:28.260715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:49.555 [2024-11-28 10:02:28.260724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.555 [2024-11-28 10:02:28.260746] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:49.555 [2024-11-28 10:02:28.261491] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:49.555 [2024-11-28 10:02:28.261515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.555 [2024-11-28 10:02:28.261524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:49.555 [2024-11-28 10:02:28.261536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:29:49.555 [2024-11-28 10:02:28.261547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.555 [2024-11-28 10:02:28.263795] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:49.555 [2024-11-28 10:02:28.279209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.555 [2024-11-28 10:02:28.279257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:49.555 [2024-11-28 10:02:28.279272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.415 ms 00:29:49.555 [2024-11-28 10:02:28.279281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.555 [2024-11-28 10:02:28.279362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.555 [2024-11-28 10:02:28.279373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:49.555 [2024-11-28 10:02:28.279382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:49.555 [2024-11-28 10:02:28.279390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.555 [2024-11-28 10:02:28.290742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.555 [2024-11-28 10:02:28.290941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:49.555 [2024-11-28 10:02:28.290961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.275 ms 00:29:49.555 [2024-11-28 10:02:28.290977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.555 [2024-11-28 10:02:28.291068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.555 [2024-11-28 10:02:28.291078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:49.555 [2024-11-28 10:02:28.291088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:49.555 [2024-11-28 10:02:28.291096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.555 [2024-11-28 10:02:28.291184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.555 [2024-11-28 10:02:28.291198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:49.555 [2024-11-28 10:02:28.291208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:49.555 [2024-11-28 10:02:28.291218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.555 [2024-11-28 10:02:28.291246] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:49.555 [2024-11-28 10:02:28.295823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.555 [2024-11-28 10:02:28.295864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:49.555 [2024-11-28 10:02:28.295878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.582 ms 00:29:49.555 [2024-11-28 10:02:28.295888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.555 [2024-11-28 10:02:28.295926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.555 [2024-11-28 10:02:28.295935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:49.555 [2024-11-28 10:02:28.295944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:49.555 [2024-11-28 10:02:28.295953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.555 [2024-11-28 10:02:28.295990] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:49.555 [2024-11-28 10:02:28.296016] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:49.555 [2024-11-28 10:02:28.296058] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:49.555 [2024-11-28 10:02:28.296079] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 
0x190 bytes 00:29:49.555 [2024-11-28 10:02:28.296216] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:49.555 [2024-11-28 10:02:28.296230] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:49.555 [2024-11-28 10:02:28.296243] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:49.555 [2024-11-28 10:02:28.296255] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:49.555 [2024-11-28 10:02:28.296266] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:49.555 [2024-11-28 10:02:28.296275] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:49.555 [2024-11-28 10:02:28.296284] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:49.555 [2024-11-28 10:02:28.296296] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:49.555 [2024-11-28 10:02:28.296304] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:49.555 [2024-11-28 10:02:28.296313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.555 [2024-11-28 10:02:28.296320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:49.555 [2024-11-28 10:02:28.296330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:29:49.555 [2024-11-28 10:02:28.296339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.555 [2024-11-28 10:02:28.296425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.555 [2024-11-28 10:02:28.296434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:49.555 [2024-11-28 10:02:28.296442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:29:49.555 [2024-11-28 10:02:28.296451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.555 [2024-11-28 10:02:28.296560] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:49.555 [2024-11-28 10:02:28.296574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:49.555 [2024-11-28 10:02:28.296583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:49.555 [2024-11-28 10:02:28.296593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.555 [2024-11-28 10:02:28.296602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:49.555 [2024-11-28 10:02:28.296609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:49.555 [2024-11-28 10:02:28.296618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:49.555 [2024-11-28 10:02:28.296628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:49.555 [2024-11-28 10:02:28.296635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:49.555 [2024-11-28 10:02:28.296643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:49.555 [2024-11-28 10:02:28.296651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:49.555 [2024-11-28 10:02:28.296658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:49.555 [2024-11-28 10:02:28.296674] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:49.555 [2024-11-28 10:02:28.296688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:49.555 [2024-11-28 10:02:28.296696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:49.555 [2024-11-28 10:02:28.296703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.555 [2024-11-28 10:02:28.296710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:49.555 [2024-11-28 10:02:28.296717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:49.555 [2024-11-28 10:02:28.296726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.555 [2024-11-28 10:02:28.296734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:49.555 [2024-11-28 10:02:28.296741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:49.555 [2024-11-28 10:02:28.296748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:49.555 [2024-11-28 10:02:28.296755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:49.555 [2024-11-28 10:02:28.296762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:49.555 [2024-11-28 10:02:28.296769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:49.555 [2024-11-28 10:02:28.296777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:49.555 [2024-11-28 10:02:28.296783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:49.555 [2024-11-28 10:02:28.296789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:49.555 [2024-11-28 10:02:28.296795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:49.555 [2024-11-28 10:02:28.296802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:49.555 [2024-11-28 10:02:28.296808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:49.555 [2024-11-28 10:02:28.296817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:49.555 [2024-11-28 10:02:28.296825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:49.555 [2024-11-28 10:02:28.296831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:49.555 [2024-11-28 10:02:28.296838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:49.555 [2024-11-28 10:02:28.296844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:49.555 [2024-11-28 10:02:28.296851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:49.555 [2024-11-28 10:02:28.296859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:49.555 [2024-11-28 10:02:28.296866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:49.555 [2024-11-28 10:02:28.296874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.556 [2024-11-28 10:02:28.296881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:49.556 [2024-11-28 10:02:28.296888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:49.556 [2024-11-28 10:02:28.296896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.556 [2024-11-28 10:02:28.296903] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:49.556 [2024-11-28 
10:02:28.296918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:49.556 [2024-11-28 10:02:28.296927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:49.556 [2024-11-28 10:02:28.296936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.556 [2024-11-28 10:02:28.296944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:49.556 [2024-11-28 10:02:28.296952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:49.556 [2024-11-28 10:02:28.296959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:49.556 [2024-11-28 10:02:28.296966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:49.556 [2024-11-28 10:02:28.296973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:49.556 [2024-11-28 10:02:28.296980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:49.556 [2024-11-28 10:02:28.296990] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:49.556 [2024-11-28 10:02:28.297001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:49.556 [2024-11-28 10:02:28.297013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:49.556 [2024-11-28 10:02:28.297021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:49.556 [2024-11-28 10:02:28.297028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:49.556 [2024-11-28 10:02:28.297035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:49.556 [2024-11-28 10:02:28.297044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:49.556 [2024-11-28 10:02:28.297051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:49.556 [2024-11-28 10:02:28.297058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:49.556 [2024-11-28 10:02:28.297065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:49.556 [2024-11-28 10:02:28.297072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:49.556 [2024-11-28 10:02:28.297079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:49.556 [2024-11-28 10:02:28.297086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:49.556 [2024-11-28 10:02:28.297094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:49.556 [2024-11-28 10:02:28.297101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:29:49.556 [2024-11-28 10:02:28.297108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:49.556 [2024-11-28 10:02:28.297115] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:49.556 [2024-11-28 10:02:28.297123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:49.556 [2024-11-28 10:02:28.297132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:49.556 [2024-11-28 10:02:28.297140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:49.556 [2024-11-28 10:02:28.297148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:49.556 [2024-11-28 10:02:28.297173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:49.556 [2024-11-28 10:02:28.297181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.556 [2024-11-28 10:02:28.297194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:49.556 [2024-11-28 10:02:28.297204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:29:49.556 [2024-11-28 10:02:28.297212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.556 [2024-11-28 10:02:28.335703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.556 [2024-11-28 10:02:28.335892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:49.556 [2024-11-28 10:02:28.335957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.441 ms 00:29:49.556 [2024-11-28 10:02:28.335989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.556 [2024-11-28 10:02:28.336102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.556 [2024-11-28 10:02:28.336128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:49.556 [2024-11-28 10:02:28.336267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:29:49.556 [2024-11-28 10:02:28.336307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.556 [2024-11-28 10:02:28.389100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.556 [2024-11-28 10:02:28.389320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:49.556 [2024-11-28 10:02:28.389401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.697 ms 00:29:49.556 [2024-11-28 10:02:28.389430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.556 [2024-11-28 10:02:28.389494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.556 [2024-11-28 10:02:28.389520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:49.556 [2024-11-28 10:02:28.389548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:49.556 [2024-11-28 10:02:28.389569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.556 [2024-11-28 10:02:28.390385] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.556 [2024-11-28 10:02:28.390538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:49.556 [2024-11-28 10:02:28.390595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:29:49.556 [2024-11-28 10:02:28.390622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.556 [2024-11-28 10:02:28.390821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.556 [2024-11-28 10:02:28.390921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:49.556 [2024-11-28 10:02:28.390991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:29:49.556 [2024-11-28 10:02:28.391015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.556 [2024-11-28 10:02:28.409332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.556 [2024-11-28 10:02:28.409484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:49.556 [2024-11-28 10:02:28.409540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.278 ms 00:29:49.556 [2024-11-28 10:02:28.409564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.556 [2024-11-28 10:02:28.424744] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:49.556 [2024-11-28 10:02:28.424920] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:49.556 [2024-11-28 10:02:28.424985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.556 [2024-11-28 10:02:28.425007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:49.556 [2024-11-28 10:02:28.425030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.290 ms 00:29:49.556 [2024-11-28 10:02:28.425049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.819 [2024-11-28 10:02:28.451327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.819 [2024-11-28 10:02:28.451488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:49.819 [2024-11-28 10:02:28.451547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.222 ms 00:29:49.819 [2024-11-28 10:02:28.451570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.819 [2024-11-28 10:02:28.464653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.819 [2024-11-28 10:02:28.464813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:49.819 [2024-11-28 10:02:28.464869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.962 ms 00:29:49.819 [2024-11-28 10:02:28.464893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.819 [2024-11-28 10:02:28.477652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.819 [2024-11-28 10:02:28.477808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:49.819 [2024-11-28 10:02:28.477866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.702 ms 00:29:49.819 [2024-11-28 10:02:28.477889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.819 [2024-11-28 10:02:28.478630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.819 
[2024-11-28 10:02:28.478697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:49.819 [2024-11-28 10:02:28.478845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:29:49.819 [2024-11-28 10:02:28.478869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.819 [2024-11-28 10:02:28.551759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.819 [2024-11-28 10:02:28.552000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:49.819 [2024-11-28 10:02:28.552035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.852 ms 00:29:49.819 [2024-11-28 10:02:28.552045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.819 [2024-11-28 10:02:28.563547] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:49.819 [2024-11-28 10:02:28.567761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.819 [2024-11-28 10:02:28.567806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:49.819 [2024-11-28 10:02:28.567820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.612 ms 00:29:49.819 [2024-11-28 10:02:28.567829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.819 [2024-11-28 10:02:28.567940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.819 [2024-11-28 10:02:28.567953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:49.819 [2024-11-28 10:02:28.567968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:49.819 [2024-11-28 10:02:28.567978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.819 [2024-11-28 10:02:28.569980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.819 [2024-11-28 10:02:28.570029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:49.819 [2024-11-28 10:02:28.570056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.961 ms 00:29:49.819 [2024-11-28 10:02:28.570066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.819 [2024-11-28 10:02:28.570098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.819 [2024-11-28 10:02:28.570108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:49.819 [2024-11-28 10:02:28.570118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:49.819 [2024-11-28 10:02:28.570126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.819 [2024-11-28 10:02:28.570195] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:49.819 [2024-11-28 10:02:28.570209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.819 [2024-11-28 10:02:28.570218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:49.819 [2024-11-28 10:02:28.570228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:49.819 [2024-11-28 10:02:28.570237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.819 [2024-11-28 10:02:28.596818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.819 [2024-11-28 10:02:28.596868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:49.819 
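
(Aside, not part of the captured output: the layout dumps above print each FTL metadata region twice, once in MiB under "NV cache layout" / "Base device layout" and once in raw block units under "SB metadata layout" as hex blk_offs/blk_sz. The two views line up if each FTL block is 4 KiB, e.g. blk_sz:0x800 is 2048 blocks = 8.00 MiB and blk_sz:0x20 is 32 blocks = 0.12 MiB. The sketch below is illustrative only: the 4 KiB block size and the type-code-to-region guesses are inferred from the numbers in this log, not taken from SPDK headers.)

# Illustrative sketch only: converts blk_offs/blk_sz block counts from the
# "SB metadata layout" dump above into the MiB figures shown in the layout dump.
# The 4 KiB block size is inferred from matching numbers in this log, and the
# region-name guesses in the labels are assumptions, not SPDK definitions.
FTL_BLOCK_SIZE = 4096  # bytes per FTL block (inferred)

def blocks_to_mib(blocks: int) -> float:
    return blocks * FTL_BLOCK_SIZE / (1024 * 1024)

entries = [
    ("type:0x0  (superblock?)", 0x0,    0x20),
    ("type:0xa  (p2l0?)",       0x5120, 0x800),
    ("type:0xe  (trim_md?)",    0x7120, 0x40),
    ("type:0x10 (trim_log?)",   0x71a0, 0x20),
]
for name, blk_offs, blk_sz in entries:
    print(f"{name}: offset {blocks_to_mib(blk_offs):.2f} MiB, "
          f"size {blocks_to_mib(blk_sz):.2f} MiB")
# e.g. blk_sz:0x800 -> 8.00 MiB, matching the p2l0..p2l3 regions, and
# blk_offs:0x5120 -> 81.12 MiB, matching the p2l0 offset in the MiB view.
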
[2024-11-28 10:02:28.596888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.557 ms 00:29:49.819 [2024-11-28 10:02:28.596896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.819 [2024-11-28 10:02:28.596993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.819 [2024-11-28 10:02:28.597005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:49.819 [2024-11-28 10:02:28.597015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:49.819 [2024-11-28 10:02:28.597023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.819 [2024-11-28 10:02:28.598605] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 337.704 ms, result 0 00:29:51.207  [2024-11-28T10:02:31.033Z] Copying: 1076/1048576 [kB] (1076 kBps) [2024-11-28T10:02:31.979Z] Copying: 3944/1048576 [kB] (2868 kBps) [2024-11-28T10:02:32.925Z] Copying: 12048/1048576 [kB] (8104 kBps) [2024-11-28T10:02:33.871Z] Copying: 28/1024 [MB] (17 MBps) [2024-11-28T10:02:34.815Z] Copying: 47/1024 [MB] (18 MBps) [2024-11-28T10:02:36.201Z] Copying: 64/1024 [MB] (17 MBps) [2024-11-28T10:02:37.145Z] Copying: 82/1024 [MB] (17 MBps) [2024-11-28T10:02:38.091Z] Copying: 100/1024 [MB] (17 MBps) [2024-11-28T10:02:39.037Z] Copying: 118/1024 [MB] (17 MBps) [2024-11-28T10:02:39.983Z] Copying: 136/1024 [MB] (17 MBps) [2024-11-28T10:02:40.929Z] Copying: 154/1024 [MB] (17 MBps) [2024-11-28T10:02:41.873Z] Copying: 172/1024 [MB] (17 MBps) [2024-11-28T10:02:42.819Z] Copying: 189/1024 [MB] (17 MBps) [2024-11-28T10:02:44.208Z] Copying: 207/1024 [MB] (17 MBps) [2024-11-28T10:02:45.152Z] Copying: 225/1024 [MB] (17 MBps) [2024-11-28T10:02:46.096Z] Copying: 242/1024 [MB] (17 MBps) [2024-11-28T10:02:47.041Z] Copying: 260/1024 [MB] (17 MBps) [2024-11-28T10:02:47.984Z] Copying: 276/1024 [MB] (16 MBps) [2024-11-28T10:02:48.928Z] Copying: 294/1024 [MB] (17 MBps) [2024-11-28T10:02:49.873Z] Copying: 312/1024 [MB] (17 MBps) [2024-11-28T10:02:50.819Z] Copying: 330/1024 [MB] (18 MBps) [2024-11-28T10:02:52.206Z] Copying: 348/1024 [MB] (17 MBps) [2024-11-28T10:02:53.148Z] Copying: 365/1024 [MB] (17 MBps) [2024-11-28T10:02:54.099Z] Copying: 383/1024 [MB] (17 MBps) [2024-11-28T10:02:55.101Z] Copying: 401/1024 [MB] (18 MBps) [2024-11-28T10:02:56.052Z] Copying: 419/1024 [MB] (18 MBps) [2024-11-28T10:02:56.997Z] Copying: 436/1024 [MB] (16 MBps) [2024-11-28T10:02:57.942Z] Copying: 454/1024 [MB] (17 MBps) [2024-11-28T10:02:58.886Z] Copying: 470/1024 [MB] (16 MBps) [2024-11-28T10:02:59.832Z] Copying: 488/1024 [MB] (17 MBps) [2024-11-28T10:03:01.223Z] Copying: 506/1024 [MB] (17 MBps) [2024-11-28T10:03:01.796Z] Copying: 524/1024 [MB] (17 MBps) [2024-11-28T10:03:03.187Z] Copying: 541/1024 [MB] (17 MBps) [2024-11-28T10:03:04.132Z] Copying: 559/1024 [MB] (17 MBps) [2024-11-28T10:03:05.076Z] Copying: 575/1024 [MB] (16 MBps) [2024-11-28T10:03:06.018Z] Copying: 592/1024 [MB] (17 MBps) [2024-11-28T10:03:06.963Z] Copying: 610/1024 [MB] (17 MBps) [2024-11-28T10:03:07.910Z] Copying: 628/1024 [MB] (17 MBps) [2024-11-28T10:03:08.856Z] Copying: 645/1024 [MB] (17 MBps) [2024-11-28T10:03:09.801Z] Copying: 663/1024 [MB] (17 MBps) [2024-11-28T10:03:11.187Z] Copying: 680/1024 [MB] (17 MBps) [2024-11-28T10:03:12.130Z] Copying: 697/1024 [MB] (16 MBps) [2024-11-28T10:03:13.073Z] Copying: 715/1024 [MB] (17 MBps) [2024-11-28T10:03:14.018Z] Copying: 732/1024 [MB] (17 MBps) [2024-11-28T10:03:14.961Z] Copying: 
750/1024 [MB] (17 MBps) [2024-11-28T10:03:15.909Z] Copying: 767/1024 [MB] (16 MBps) [2024-11-28T10:03:16.852Z] Copying: 784/1024 [MB] (17 MBps) [2024-11-28T10:03:17.797Z] Copying: 802/1024 [MB] (17 MBps) [2024-11-28T10:03:19.185Z] Copying: 820/1024 [MB] (17 MBps) [2024-11-28T10:03:20.130Z] Copying: 838/1024 [MB] (17 MBps) [2024-11-28T10:03:21.074Z] Copying: 856/1024 [MB] (17 MBps) [2024-11-28T10:03:22.019Z] Copying: 873/1024 [MB] (17 MBps) [2024-11-28T10:03:22.964Z] Copying: 891/1024 [MB] (17 MBps) [2024-11-28T10:03:23.914Z] Copying: 908/1024 [MB] (17 MBps) [2024-11-28T10:03:24.858Z] Copying: 923/1024 [MB] (15 MBps) [2024-11-28T10:03:25.799Z] Copying: 951/1024 [MB] (27 MBps) [2024-11-28T10:03:26.809Z] Copying: 966/1024 [MB] (15 MBps) [2024-11-28T10:03:28.197Z] Copying: 982/1024 [MB] (15 MBps) [2024-11-28T10:03:29.141Z] Copying: 998/1024 [MB] (15 MBps) [2024-11-28T10:03:29.141Z] Copying: 1018/1024 [MB] (20 MBps) [2024-11-28T10:03:29.403Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-28 10:03:29.190247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.523 [2024-11-28 10:03:29.190855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:50.523 [2024-11-28 10:03:29.190957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:50.523 [2024-11-28 10:03:29.190987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.523 [2024-11-28 10:03:29.191065] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:50.523 [2024-11-28 10:03:29.194188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.523 [2024-11-28 10:03:29.194311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:50.523 [2024-11-28 10:03:29.194372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.074 ms 00:30:50.523 [2024-11-28 10:03:29.194397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.523 [2024-11-28 10:03:29.194651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.523 [2024-11-28 10:03:29.194835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:50.523 [2024-11-28 10:03:29.194864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:30:50.523 [2024-11-28 10:03:29.194885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.523 [2024-11-28 10:03:29.206795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.523 [2024-11-28 10:03:29.206896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:50.523 [2024-11-28 10:03:29.206942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.878 ms 00:30:50.523 [2024-11-28 10:03:29.206960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.523 [2024-11-28 10:03:29.211619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.523 [2024-11-28 10:03:29.211723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:50.523 [2024-11-28 10:03:29.211776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.627 ms 00:30:50.523 [2024-11-28 10:03:29.211794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.523 [2024-11-28 10:03:29.230678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.523 [2024-11-28 10:03:29.230776] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:50.523 [2024-11-28 10:03:29.230817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.832 ms 00:30:50.523 [2024-11-28 10:03:29.230834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.523 [2024-11-28 10:03:29.243148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.523 [2024-11-28 10:03:29.243243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:50.523 [2024-11-28 10:03:29.243282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.284 ms 00:30:50.524 [2024-11-28 10:03:29.243300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.524 [2024-11-28 10:03:29.247318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.524 [2024-11-28 10:03:29.247420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:50.524 [2024-11-28 10:03:29.247435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.983 ms 00:30:50.524 [2024-11-28 10:03:29.247447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.524 [2024-11-28 10:03:29.266019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.524 [2024-11-28 10:03:29.266043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:50.524 [2024-11-28 10:03:29.266051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.559 ms 00:30:50.524 [2024-11-28 10:03:29.266058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.524 [2024-11-28 10:03:29.283926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.524 [2024-11-28 10:03:29.283948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:50.524 [2024-11-28 10:03:29.283955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.842 ms 00:30:50.524 [2024-11-28 10:03:29.283962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.524 [2024-11-28 10:03:29.301466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.524 [2024-11-28 10:03:29.301488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:50.524 [2024-11-28 10:03:29.301496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.479 ms 00:30:50.524 [2024-11-28 10:03:29.301502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.524 [2024-11-28 10:03:29.318738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.524 [2024-11-28 10:03:29.318759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:50.524 [2024-11-28 10:03:29.318767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.192 ms 00:30:50.524 [2024-11-28 10:03:29.318773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.524 [2024-11-28 10:03:29.318797] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:50.524 [2024-11-28 10:03:29.318809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:50.524 [2024-11-28 10:03:29.318817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:50.524 [2024-11-28 10:03:29.318824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 
00:30:50.524 [2024-11-28 10:03:29.318831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 
0 state: free 00:30:50.524 [2024-11-28 10:03:29.318975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.318998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
53: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:50.524 [2024-11-28 10:03:29.319242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319270] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:50.525 [2024-11-28 10:03:29.319411] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:50.525 [2024-11-28 10:03:29.319418] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7a4e906c-1038-4a20-9093-7ce808a69d46 00:30:50.525 [2024-11-28 10:03:29.319424] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] total valid LBAs: 262656 00:30:50.525 [2024-11-28 10:03:29.319430] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 173248 00:30:50.525 [2024-11-28 10:03:29.319439] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 171264 00:30:50.525 [2024-11-28 10:03:29.319444] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0116 00:30:50.525 [2024-11-28 10:03:29.319451] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:50.525 [2024-11-28 10:03:29.319461] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:50.525 [2024-11-28 10:03:29.319467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:50.525 [2024-11-28 10:03:29.319472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:50.525 [2024-11-28 10:03:29.319477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:50.525 [2024-11-28 10:03:29.319483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.525 [2024-11-28 10:03:29.319488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:50.525 [2024-11-28 10:03:29.319495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:30:50.525 [2024-11-28 10:03:29.319501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.525 [2024-11-28 10:03:29.329391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.525 [2024-11-28 10:03:29.329411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:50.525 [2024-11-28 10:03:29.329419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.877 ms 00:30:50.525 [2024-11-28 10:03:29.329425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.525 [2024-11-28 10:03:29.329713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.525 [2024-11-28 10:03:29.329720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:50.525 [2024-11-28 10:03:29.329727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:30:50.525 [2024-11-28 10:03:29.329733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.525 [2024-11-28 10:03:29.357057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.525 [2024-11-28 10:03:29.357080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:50.525 [2024-11-28 10:03:29.357088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.525 [2024-11-28 10:03:29.357094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.525 [2024-11-28 10:03:29.357132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.525 [2024-11-28 10:03:29.357138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:50.525 [2024-11-28 10:03:29.357144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.525 [2024-11-28 10:03:29.357158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.525 [2024-11-28 10:03:29.357201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.525 [2024-11-28 10:03:29.357209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:50.525 [2024-11-28 10:03:29.357215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
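
(Aside, not part of the captured output: the "Dump statistics" records above report WAF: 1.0116, and that figure follows directly from the two write counters in the same dump. A quick back-of-the-envelope check in illustrative Python, assuming the same 4 KiB block size inferred from the layout dump; reading the delta between the counters as FTL-internal metadata/relocation traffic is an assumption, not something the log states.)

# Illustrative arithmetic only, not SPDK code: reproduces the write-amplification
# figure from the "Dump statistics" records above.
total_writes = 173248   # "total writes" from the stats dump, in blocks
user_writes  = 171264   # "user writes" from the stats dump, in blocks
print(f"WAF = {total_writes / user_writes:.4f}")          # -> 1.0116, as reported
print(f"extra FTL-side writes: {total_writes - user_writes} blocks")  # -> 1984

valid_lbas = 262656     # "total valid LBAs" from the stats dump
print(f"valid data ~ {valid_lbas * 4096 / 2**20:.0f} MiB")  # -> 1026 MiB at 4 KiB/LBA
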
00:30:50.525 [2024-11-28 10:03:29.357221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.525 [2024-11-28 10:03:29.357233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.525 [2024-11-28 10:03:29.357239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:50.525 [2024-11-28 10:03:29.357245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.525 [2024-11-28 10:03:29.357251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.787 [2024-11-28 10:03:29.419389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.787 [2024-11-28 10:03:29.419418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:50.787 [2024-11-28 10:03:29.419427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.787 [2024-11-28 10:03:29.419433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.787 [2024-11-28 10:03:29.470529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.787 [2024-11-28 10:03:29.470560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:50.787 [2024-11-28 10:03:29.470569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.787 [2024-11-28 10:03:29.470576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.787 [2024-11-28 10:03:29.470641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.787 [2024-11-28 10:03:29.470653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:50.787 [2024-11-28 10:03:29.470660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.787 [2024-11-28 10:03:29.470666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.787 [2024-11-28 10:03:29.470694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.787 [2024-11-28 10:03:29.470702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:50.787 [2024-11-28 10:03:29.470708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.787 [2024-11-28 10:03:29.470714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.787 [2024-11-28 10:03:29.470786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.787 [2024-11-28 10:03:29.470795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:50.787 [2024-11-28 10:03:29.470804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.787 [2024-11-28 10:03:29.470810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.787 [2024-11-28 10:03:29.470835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.787 [2024-11-28 10:03:29.470843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:50.787 [2024-11-28 10:03:29.470849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.787 [2024-11-28 10:03:29.470856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.787 [2024-11-28 10:03:29.470891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.787 [2024-11-28 10:03:29.470898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:50.787 [2024-11-28 10:03:29.470907] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.787 [2024-11-28 10:03:29.470913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.787 [2024-11-28 10:03:29.470956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.787 [2024-11-28 10:03:29.470965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:50.787 [2024-11-28 10:03:29.470971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.787 [2024-11-28 10:03:29.470977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.787 [2024-11-28 10:03:29.471086] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 280.826 ms, result 0 00:30:51.359 00:30:51.359 00:30:51.359 10:03:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:53.903 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:53.904 10:03:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:53.904 [2024-11-28 10:03:32.268474] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:30:53.904 [2024-11-28 10:03:32.268756] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83670 ] 00:30:53.904 [2024-11-28 10:03:32.431895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.904 [2024-11-28 10:03:32.575027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.166 [2024-11-28 10:03:32.888605] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:54.166 [2024-11-28 10:03:32.888677] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:54.429 [2024-11-28 10:03:33.053881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.429 [2024-11-28 10:03:33.053951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:54.429 [2024-11-28 10:03:33.053968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:54.429 [2024-11-28 10:03:33.053978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.429 [2024-11-28 10:03:33.054036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.429 [2024-11-28 10:03:33.054049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:54.429 [2024-11-28 10:03:33.054059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:54.429 [2024-11-28 10:03:33.054068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.429 [2024-11-28 10:03:33.054090] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:54.429 [2024-11-28 10:03:33.054888] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:54.429 [2024-11-28 10:03:33.054928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.429 [2024-11-28 10:03:33.054937] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:54.429 [2024-11-28 10:03:33.054950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:30:54.429 [2024-11-28 10:03:33.054959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.429 [2024-11-28 10:03:33.057236] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:54.429 [2024-11-28 10:03:33.072791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.429 [2024-11-28 10:03:33.072842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:54.429 [2024-11-28 10:03:33.072856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.557 ms 00:30:54.429 [2024-11-28 10:03:33.072866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.429 [2024-11-28 10:03:33.072952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.429 [2024-11-28 10:03:33.072963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:54.429 [2024-11-28 10:03:33.072973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:54.429 [2024-11-28 10:03:33.072982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.429 [2024-11-28 10:03:33.084453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.429 [2024-11-28 10:03:33.084495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:54.429 [2024-11-28 10:03:33.084507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.388 ms 00:30:54.429 [2024-11-28 10:03:33.084523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.429 [2024-11-28 10:03:33.084613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.429 [2024-11-28 10:03:33.084624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:54.429 [2024-11-28 10:03:33.084633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:54.429 [2024-11-28 10:03:33.084642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.429 [2024-11-28 10:03:33.084702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.429 [2024-11-28 10:03:33.084715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:54.429 [2024-11-28 10:03:33.084724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:54.429 [2024-11-28 10:03:33.084732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.429 [2024-11-28 10:03:33.084760] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:54.429 [2024-11-28 10:03:33.089295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.429 [2024-11-28 10:03:33.089337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:54.429 [2024-11-28 10:03:33.089352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.542 ms 00:30:54.429 [2024-11-28 10:03:33.089361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.429 [2024-11-28 10:03:33.089398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.429 [2024-11-28 10:03:33.089407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:54.429 [2024-11-28 10:03:33.089417] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:54.429 [2024-11-28 10:03:33.089425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.429 [2024-11-28 10:03:33.089464] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:54.429 [2024-11-28 10:03:33.089491] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:54.429 [2024-11-28 10:03:33.089534] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:54.429 [2024-11-28 10:03:33.089557] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:54.429 [2024-11-28 10:03:33.089669] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:54.429 [2024-11-28 10:03:33.089683] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:54.429 [2024-11-28 10:03:33.089695] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:54.429 [2024-11-28 10:03:33.089707] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:54.429 [2024-11-28 10:03:33.089717] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:54.429 [2024-11-28 10:03:33.089725] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:54.430 [2024-11-28 10:03:33.089736] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:54.430 [2024-11-28 10:03:33.089748] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:54.430 [2024-11-28 10:03:33.089757] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:54.430 [2024-11-28 10:03:33.089766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.430 [2024-11-28 10:03:33.089775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:54.430 [2024-11-28 10:03:33.089784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:30:54.430 [2024-11-28 10:03:33.089791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.430 [2024-11-28 10:03:33.089876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.430 [2024-11-28 10:03:33.089888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:54.430 [2024-11-28 10:03:33.089898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:54.430 [2024-11-28 10:03:33.089910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.430 [2024-11-28 10:03:33.090021] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:54.430 [2024-11-28 10:03:33.090044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:54.430 [2024-11-28 10:03:33.090056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:54.430 [2024-11-28 10:03:33.090065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:54.430 [2024-11-28 10:03:33.090075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:54.430 [2024-11-28 10:03:33.090083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 
MiB 00:30:54.430 [2024-11-28 10:03:33.090090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:54.430 [2024-11-28 10:03:33.090114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:54.430 [2024-11-28 10:03:33.090123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:54.430 [2024-11-28 10:03:33.090130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:54.430 [2024-11-28 10:03:33.090138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:54.430 [2024-11-28 10:03:33.090145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:54.430 [2024-11-28 10:03:33.090171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:54.430 [2024-11-28 10:03:33.090192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:54.430 [2024-11-28 10:03:33.090201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:54.430 [2024-11-28 10:03:33.090209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:54.430 [2024-11-28 10:03:33.090216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:54.430 [2024-11-28 10:03:33.090224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:54.430 [2024-11-28 10:03:33.090233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:54.430 [2024-11-28 10:03:33.090240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:54.430 [2024-11-28 10:03:33.090248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:54.430 [2024-11-28 10:03:33.090255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:54.430 [2024-11-28 10:03:33.090262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:54.430 [2024-11-28 10:03:33.090271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:54.430 [2024-11-28 10:03:33.090278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:54.430 [2024-11-28 10:03:33.090285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:54.430 [2024-11-28 10:03:33.090292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:54.430 [2024-11-28 10:03:33.090300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:54.430 [2024-11-28 10:03:33.090306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:54.430 [2024-11-28 10:03:33.090314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:54.430 [2024-11-28 10:03:33.090321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:54.430 [2024-11-28 10:03:33.090328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:54.430 [2024-11-28 10:03:33.090336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:54.430 [2024-11-28 10:03:33.090344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:54.430 [2024-11-28 10:03:33.090351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:54.430 [2024-11-28 10:03:33.090358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:54.430 [2024-11-28 10:03:33.090364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:54.430 [2024-11-28 10:03:33.090373] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log 00:30:54.430 [2024-11-28 10:03:33.090382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:54.430 [2024-11-28 10:03:33.090390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:54.430 [2024-11-28 10:03:33.090397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:54.430 [2024-11-28 10:03:33.090403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:54.430 [2024-11-28 10:03:33.090411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:54.430 [2024-11-28 10:03:33.090417] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:54.430 [2024-11-28 10:03:33.090425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:54.430 [2024-11-28 10:03:33.090437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:54.430 [2024-11-28 10:03:33.090446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:54.430 [2024-11-28 10:03:33.090454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:54.430 [2024-11-28 10:03:33.090461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:54.430 [2024-11-28 10:03:33.090469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:54.430 [2024-11-28 10:03:33.090476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:54.430 [2024-11-28 10:03:33.090484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:54.430 [2024-11-28 10:03:33.090493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:54.430 [2024-11-28 10:03:33.090502] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:54.430 [2024-11-28 10:03:33.090513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:54.430 [2024-11-28 10:03:33.090526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:54.430 [2024-11-28 10:03:33.090535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:54.430 [2024-11-28 10:03:33.090545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:54.430 [2024-11-28 10:03:33.090553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:54.430 [2024-11-28 10:03:33.090561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:54.430 [2024-11-28 10:03:33.090568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:54.430 [2024-11-28 10:03:33.090576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:54.430 [2024-11-28 10:03:33.090585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:54.430 [2024-11-28 10:03:33.090593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:54.430 [2024-11-28 10:03:33.090601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:54.430 [2024-11-28 10:03:33.090609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:54.430 [2024-11-28 10:03:33.090616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:54.430 [2024-11-28 10:03:33.090624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:54.430 [2024-11-28 10:03:33.090632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:54.430 [2024-11-28 10:03:33.090640] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:54.430 [2024-11-28 10:03:33.090651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:54.430 [2024-11-28 10:03:33.090660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:54.430 [2024-11-28 10:03:33.090669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:54.430 [2024-11-28 10:03:33.090677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:54.430 [2024-11-28 10:03:33.090685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:54.430 [2024-11-28 10:03:33.090693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.430 [2024-11-28 10:03:33.090704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:54.430 [2024-11-28 10:03:33.090714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:30:54.430 [2024-11-28 10:03:33.090722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.430 [2024-11-28 10:03:33.129065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.430 [2024-11-28 10:03:33.129119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:54.430 [2024-11-28 10:03:33.129132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.294 ms 00:30:54.430 [2024-11-28 10:03:33.129146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.430 [2024-11-28 10:03:33.129262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.430 [2024-11-28 10:03:33.129273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:54.430 [2024-11-28 10:03:33.129284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:30:54.430 [2024-11-28 10:03:33.129293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.430 [2024-11-28 10:03:33.180052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.430 [2024-11-28 10:03:33.180108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize NV cache 00:30:54.431 [2024-11-28 10:03:33.180122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.693 ms 00:30:54.431 [2024-11-28 10:03:33.180132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.431 [2024-11-28 10:03:33.180200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.431 [2024-11-28 10:03:33.180213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:54.431 [2024-11-28 10:03:33.180228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:54.431 [2024-11-28 10:03:33.180236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.431 [2024-11-28 10:03:33.180984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.431 [2024-11-28 10:03:33.181028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:54.431 [2024-11-28 10:03:33.181040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:30:54.431 [2024-11-28 10:03:33.181049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.431 [2024-11-28 10:03:33.181241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.431 [2024-11-28 10:03:33.181255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:54.431 [2024-11-28 10:03:33.181272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:30:54.431 [2024-11-28 10:03:33.181281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.431 [2024-11-28 10:03:33.199441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.431 [2024-11-28 10:03:33.199489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:54.431 [2024-11-28 10:03:33.199501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.137 ms 00:30:54.431 [2024-11-28 10:03:33.199510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.431 [2024-11-28 10:03:33.214759] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:54.431 [2024-11-28 10:03:33.214809] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:54.431 [2024-11-28 10:03:33.214823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.431 [2024-11-28 10:03:33.214835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:54.431 [2024-11-28 10:03:33.214846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.199 ms 00:30:54.431 [2024-11-28 10:03:33.214855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.431 [2024-11-28 10:03:33.241554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.431 [2024-11-28 10:03:33.241603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:54.431 [2024-11-28 10:03:33.241616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.641 ms 00:30:54.431 [2024-11-28 10:03:33.241626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.431 [2024-11-28 10:03:33.254781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.431 [2024-11-28 10:03:33.254829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:54.431 [2024-11-28 10:03:33.254842] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.087 ms 00:30:54.431 [2024-11-28 10:03:33.254850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.431 [2024-11-28 10:03:33.267428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.431 [2024-11-28 10:03:33.267474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:54.431 [2024-11-28 10:03:33.267485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.528 ms 00:30:54.431 [2024-11-28 10:03:33.267494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.431 [2024-11-28 10:03:33.268173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.431 [2024-11-28 10:03:33.268208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:54.431 [2024-11-28 10:03:33.268224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:30:54.431 [2024-11-28 10:03:33.268233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.692 [2024-11-28 10:03:33.341702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.692 [2024-11-28 10:03:33.341764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:54.692 [2024-11-28 10:03:33.341789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.447 ms 00:30:54.692 [2024-11-28 10:03:33.341799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.692 [2024-11-28 10:03:33.354045] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:54.693 [2024-11-28 10:03:33.357985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.693 [2024-11-28 10:03:33.358031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:54.693 [2024-11-28 10:03:33.358044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.128 ms 00:30:54.693 [2024-11-28 10:03:33.358054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.693 [2024-11-28 10:03:33.358181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.693 [2024-11-28 10:03:33.358196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:54.693 [2024-11-28 10:03:33.358212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:54.693 [2024-11-28 10:03:33.358222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.693 [2024-11-28 10:03:33.359331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.693 [2024-11-28 10:03:33.359376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:54.693 [2024-11-28 10:03:33.359388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:30:54.693 [2024-11-28 10:03:33.359398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.693 [2024-11-28 10:03:33.359436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.693 [2024-11-28 10:03:33.359447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:54.693 [2024-11-28 10:03:33.359457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:54.693 [2024-11-28 10:03:33.359466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.693 [2024-11-28 10:03:33.359515] mngt/ftl_mngt_self_test.c: 
208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:54.693 [2024-11-28 10:03:33.359530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.693 [2024-11-28 10:03:33.359540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:54.693 [2024-11-28 10:03:33.359551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:54.693 [2024-11-28 10:03:33.359560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.693 [2024-11-28 10:03:33.385472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.693 [2024-11-28 10:03:33.385520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:54.693 [2024-11-28 10:03:33.385540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.893 ms 00:30:54.693 [2024-11-28 10:03:33.385549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.693 [2024-11-28 10:03:33.385640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.693 [2024-11-28 10:03:33.385653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:54.693 [2024-11-28 10:03:33.385663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:54.693 [2024-11-28 10:03:33.385673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.693 [2024-11-28 10:03:33.387211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 332.733 ms, result 0 00:30:56.111  [2024-11-28T10:03:35.934Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-28T10:03:36.880Z] Copying: 33/1024 [MB] (15 MBps) [2024-11-28T10:03:37.825Z] Copying: 54/1024 [MB] (21 MBps) [2024-11-28T10:03:38.770Z] Copying: 76/1024 [MB] (22 MBps) [2024-11-28T10:03:39.716Z] Copying: 89/1024 [MB] (12 MBps) [2024-11-28T10:03:40.658Z] Copying: 106/1024 [MB] (16 MBps) [2024-11-28T10:03:41.601Z] Copying: 119/1024 [MB] (12 MBps) [2024-11-28T10:03:42.987Z] Copying: 135/1024 [MB] (16 MBps) [2024-11-28T10:03:43.931Z] Copying: 157/1024 [MB] (21 MBps) [2024-11-28T10:03:44.875Z] Copying: 179/1024 [MB] (22 MBps) [2024-11-28T10:03:45.818Z] Copying: 191/1024 [MB] (11 MBps) [2024-11-28T10:03:46.762Z] Copying: 202/1024 [MB] (11 MBps) [2024-11-28T10:03:47.706Z] Copying: 213/1024 [MB] (11 MBps) [2024-11-28T10:03:48.650Z] Copying: 225/1024 [MB] (11 MBps) [2024-11-28T10:03:49.594Z] Copying: 236/1024 [MB] (11 MBps) [2024-11-28T10:03:50.978Z] Copying: 248/1024 [MB] (11 MBps) [2024-11-28T10:03:51.924Z] Copying: 259/1024 [MB] (11 MBps) [2024-11-28T10:03:52.869Z] Copying: 271/1024 [MB] (11 MBps) [2024-11-28T10:03:53.815Z] Copying: 281/1024 [MB] (10 MBps) [2024-11-28T10:03:54.761Z] Copying: 292/1024 [MB] (10 MBps) [2024-11-28T10:03:55.706Z] Copying: 303/1024 [MB] (11 MBps) [2024-11-28T10:03:56.650Z] Copying: 315/1024 [MB] (11 MBps) [2024-11-28T10:03:57.597Z] Copying: 326/1024 [MB] (10 MBps) [2024-11-28T10:03:58.617Z] Copying: 337/1024 [MB] (11 MBps) [2024-11-28T10:04:00.004Z] Copying: 349/1024 [MB] (11 MBps) [2024-11-28T10:04:00.576Z] Copying: 360/1024 [MB] (11 MBps) [2024-11-28T10:04:01.959Z] Copying: 371/1024 [MB] (11 MBps) [2024-11-28T10:04:02.903Z] Copying: 383/1024 [MB] (11 MBps) [2024-11-28T10:04:03.846Z] Copying: 394/1024 [MB] (11 MBps) [2024-11-28T10:04:04.791Z] Copying: 406/1024 [MB] (11 MBps) [2024-11-28T10:04:05.734Z] Copying: 417/1024 [MB] (10 MBps) [2024-11-28T10:04:06.679Z] Copying: 428/1024 [MB] (11 MBps) [2024-11-28T10:04:07.626Z] 
Copying: 443/1024 [MB] (14 MBps) [2024-11-28T10:04:09.014Z] Copying: 454/1024 [MB] (10 MBps) [2024-11-28T10:04:09.587Z] Copying: 465/1024 [MB] (11 MBps) [2024-11-28T10:04:10.976Z] Copying: 477/1024 [MB] (11 MBps) [2024-11-28T10:04:11.921Z] Copying: 489/1024 [MB] (11 MBps) [2024-11-28T10:04:12.865Z] Copying: 500/1024 [MB] (11 MBps) [2024-11-28T10:04:13.810Z] Copying: 511/1024 [MB] (11 MBps) [2024-11-28T10:04:14.755Z] Copying: 523/1024 [MB] (11 MBps) [2024-11-28T10:04:15.698Z] Copying: 534/1024 [MB] (11 MBps) [2024-11-28T10:04:16.642Z] Copying: 546/1024 [MB] (11 MBps) [2024-11-28T10:04:17.589Z] Copying: 557/1024 [MB] (10 MBps) [2024-11-28T10:04:18.982Z] Copying: 567/1024 [MB] (10 MBps) [2024-11-28T10:04:19.933Z] Copying: 579/1024 [MB] (11 MBps) [2024-11-28T10:04:20.876Z] Copying: 590/1024 [MB] (11 MBps) [2024-11-28T10:04:21.822Z] Copying: 601/1024 [MB] (11 MBps) [2024-11-28T10:04:22.768Z] Copying: 613/1024 [MB] (11 MBps) [2024-11-28T10:04:23.713Z] Copying: 623/1024 [MB] (10 MBps) [2024-11-28T10:04:24.657Z] Copying: 635/1024 [MB] (11 MBps) [2024-11-28T10:04:25.600Z] Copying: 646/1024 [MB] (11 MBps) [2024-11-28T10:04:26.985Z] Copying: 657/1024 [MB] (11 MBps) [2024-11-28T10:04:27.930Z] Copying: 668/1024 [MB] (10 MBps) [2024-11-28T10:04:28.875Z] Copying: 679/1024 [MB] (11 MBps) [2024-11-28T10:04:29.852Z] Copying: 691/1024 [MB] (11 MBps) [2024-11-28T10:04:30.837Z] Copying: 702/1024 [MB] (10 MBps) [2024-11-28T10:04:31.782Z] Copying: 713/1024 [MB] (11 MBps) [2024-11-28T10:04:32.726Z] Copying: 725/1024 [MB] (11 MBps) [2024-11-28T10:04:33.671Z] Copying: 736/1024 [MB] (11 MBps) [2024-11-28T10:04:34.618Z] Copying: 748/1024 [MB] (11 MBps) [2024-11-28T10:04:36.004Z] Copying: 759/1024 [MB] (11 MBps) [2024-11-28T10:04:36.575Z] Copying: 771/1024 [MB] (11 MBps) [2024-11-28T10:04:37.957Z] Copying: 782/1024 [MB] (11 MBps) [2024-11-28T10:04:38.903Z] Copying: 799/1024 [MB] (16 MBps) [2024-11-28T10:04:39.847Z] Copying: 809/1024 [MB] (10 MBps) [2024-11-28T10:04:40.793Z] Copying: 820/1024 [MB] (11 MBps) [2024-11-28T10:04:41.738Z] Copying: 832/1024 [MB] (11 MBps) [2024-11-28T10:04:42.683Z] Copying: 845/1024 [MB] (13 MBps) [2024-11-28T10:04:43.626Z] Copying: 857/1024 [MB] (11 MBps) [2024-11-28T10:04:44.571Z] Copying: 872/1024 [MB] (15 MBps) [2024-11-28T10:04:45.959Z] Copying: 884/1024 [MB] (11 MBps) [2024-11-28T10:04:46.904Z] Copying: 896/1024 [MB] (12 MBps) [2024-11-28T10:04:47.851Z] Copying: 908/1024 [MB] (11 MBps) [2024-11-28T10:04:48.798Z] Copying: 921/1024 [MB] (12 MBps) [2024-11-28T10:04:49.745Z] Copying: 931/1024 [MB] (10 MBps) [2024-11-28T10:04:50.690Z] Copying: 942/1024 [MB] (10 MBps) [2024-11-28T10:04:51.633Z] Copying: 953/1024 [MB] (10 MBps) [2024-11-28T10:04:52.579Z] Copying: 968/1024 [MB] (15 MBps) [2024-11-28T10:04:53.966Z] Copying: 984/1024 [MB] (16 MBps) [2024-11-28T10:04:54.910Z] Copying: 995/1024 [MB] (11 MBps) [2024-11-28T10:04:55.854Z] Copying: 1007/1024 [MB] (12 MBps) [2024-11-28T10:04:56.116Z] Copying: 1019/1024 [MB] (11 MBps) [2024-11-28T10:04:56.116Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-28 10:04:56.110139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.236 [2024-11-28 10:04:56.110591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:17.236 [2024-11-28 10:04:56.110634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:17.236 [2024-11-28 10:04:56.110648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.236 [2024-11-28 10:04:56.110701] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:17.236 [2024-11-28 10:04:56.115011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.236 [2024-11-28 10:04:56.115072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:17.236 [2024-11-28 10:04:56.115084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.286 ms 00:32:17.236 [2024-11-28 10:04:56.115094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.498 [2024-11-28 10:04:56.115357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.498 [2024-11-28 10:04:56.115375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:17.499 [2024-11-28 10:04:56.115386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:32:17.499 [2024-11-28 10:04:56.115395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.499 [2024-11-28 10:04:56.119694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.499 [2024-11-28 10:04:56.119727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:17.499 [2024-11-28 10:04:56.119739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.283 ms 00:32:17.499 [2024-11-28 10:04:56.119755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.499 [2024-11-28 10:04:56.126206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.499 [2024-11-28 10:04:56.126257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:17.499 [2024-11-28 10:04:56.126269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.428 ms 00:32:17.499 [2024-11-28 10:04:56.126278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.499 [2024-11-28 10:04:56.154795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.499 [2024-11-28 10:04:56.154849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:17.499 [2024-11-28 10:04:56.154863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.441 ms 00:32:17.499 [2024-11-28 10:04:56.154872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.499 [2024-11-28 10:04:56.171599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.499 [2024-11-28 10:04:56.171652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:17.499 [2024-11-28 10:04:56.171666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.675 ms 00:32:17.499 [2024-11-28 10:04:56.171676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.499 [2024-11-28 10:04:56.176353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.499 [2024-11-28 10:04:56.176403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:17.499 [2024-11-28 10:04:56.176414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.612 ms 00:32:17.499 [2024-11-28 10:04:56.176423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.499 [2024-11-28 10:04:56.202653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.499 [2024-11-28 10:04:56.202700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:17.499 [2024-11-28 10:04:56.202712] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 26.214 ms 00:32:17.499 [2024-11-28 10:04:56.202720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.499 [2024-11-28 10:04:56.228678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.499 [2024-11-28 10:04:56.228724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:17.499 [2024-11-28 10:04:56.228737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.911 ms 00:32:17.499 [2024-11-28 10:04:56.228745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.499 [2024-11-28 10:04:56.253937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.499 [2024-11-28 10:04:56.253983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:17.499 [2024-11-28 10:04:56.253996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.145 ms 00:32:17.499 [2024-11-28 10:04:56.254004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.499 [2024-11-28 10:04:56.279071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.499 [2024-11-28 10:04:56.279119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:17.499 [2024-11-28 10:04:56.279130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.974 ms 00:32:17.499 [2024-11-28 10:04:56.279138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.499 [2024-11-28 10:04:56.279200] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:17.499 [2024-11-28 10:04:56.279225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:17.499 [2024-11-28 10:04:56.279241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:32:17.499 [2024-11-28 10:04:56.279250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279343] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 
10:04:56.279541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:17.499 [2024-11-28 10:04:56.279676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 
00:32:17.500 [2024-11-28 10:04:56.279743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 
wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.279995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.280002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.280010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.280018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.280026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:17.500 [2024-11-28 10:04:56.280041] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:17.500 [2024-11-28 10:04:56.280050] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7a4e906c-1038-4a20-9093-7ce808a69d46 00:32:17.500 [2024-11-28 10:04:56.280058] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:32:17.500 [2024-11-28 10:04:56.280066] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:17.500 [2024-11-28 10:04:56.280075] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:17.500 [2024-11-28 10:04:56.280084] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:17.500 [2024-11-28 10:04:56.280099] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:17.500 [2024-11-28 10:04:56.280108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:17.500 [2024-11-28 10:04:56.280116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:17.500 [2024-11-28 10:04:56.280124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:17.500 [2024-11-28 10:04:56.280129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:17.500 [2024-11-28 10:04:56.280137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.500 [2024-11-28 10:04:56.280145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:17.500 [2024-11-28 10:04:56.280168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms 00:32:17.500 [2024-11-28 10:04:56.280179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.500 [2024-11-28 10:04:56.294935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:32:17.500 [2024-11-28 10:04:56.294982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:17.500 [2024-11-28 10:04:56.295005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.722 ms 00:32:17.500 [2024-11-28 10:04:56.295014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.500 [2024-11-28 10:04:56.295453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.500 [2024-11-28 10:04:56.295474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:17.500 [2024-11-28 10:04:56.295483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:32:17.500 [2024-11-28 10:04:56.295490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.500 [2024-11-28 10:04:56.335147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.500 [2024-11-28 10:04:56.335207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:17.500 [2024-11-28 10:04:56.335221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.500 [2024-11-28 10:04:56.335229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.500 [2024-11-28 10:04:56.335292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.500 [2024-11-28 10:04:56.335308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:17.500 [2024-11-28 10:04:56.335317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.500 [2024-11-28 10:04:56.335325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.500 [2024-11-28 10:04:56.335422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.500 [2024-11-28 10:04:56.335436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:17.500 [2024-11-28 10:04:56.335445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.500 [2024-11-28 10:04:56.335453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.500 [2024-11-28 10:04:56.335469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.500 [2024-11-28 10:04:56.335479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:17.500 [2024-11-28 10:04:56.335491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.500 [2024-11-28 10:04:56.335499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.761 [2024-11-28 10:04:56.428013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.761 [2024-11-28 10:04:56.428078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:17.761 [2024-11-28 10:04:56.428093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.761 [2024-11-28 10:04:56.428103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.761 [2024-11-28 10:04:56.503749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.761 [2024-11-28 10:04:56.503819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:17.761 [2024-11-28 10:04:56.503831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.761 [2024-11-28 10:04:56.503841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.761 [2024-11-28 
10:04:56.503916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.761 [2024-11-28 10:04:56.503926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:17.761 [2024-11-28 10:04:56.503936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.761 [2024-11-28 10:04:56.503945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.761 [2024-11-28 10:04:56.504012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.761 [2024-11-28 10:04:56.504025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:17.761 [2024-11-28 10:04:56.504035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.761 [2024-11-28 10:04:56.504050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.761 [2024-11-28 10:04:56.504186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.761 [2024-11-28 10:04:56.504200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:17.761 [2024-11-28 10:04:56.504210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.762 [2024-11-28 10:04:56.504220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.762 [2024-11-28 10:04:56.504261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.762 [2024-11-28 10:04:56.504274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:17.762 [2024-11-28 10:04:56.504283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.762 [2024-11-28 10:04:56.504292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.762 [2024-11-28 10:04:56.504350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.762 [2024-11-28 10:04:56.504360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:17.762 [2024-11-28 10:04:56.504370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.762 [2024-11-28 10:04:56.504380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.762 [2024-11-28 10:04:56.504439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.762 [2024-11-28 10:04:56.504452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:17.762 [2024-11-28 10:04:56.504461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.762 [2024-11-28 10:04:56.504473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.762 [2024-11-28 10:04:56.504642] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 394.467 ms, result 0 00:32:18.706 00:32:18.706 00:32:18.706 10:04:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:20.624 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:32:20.624 10:04:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:32:20.624 10:04:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:32:20.624 10:04:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:20.624 10:04:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:20.886 10:04:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:32:20.886 10:04:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:20.886 10:04:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:20.886 10:04:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81406 00:32:20.886 10:04:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81406 ']' 00:32:20.886 10:04:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81406 00:32:20.886 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81406) - No such process 00:32:20.886 Process with pid 81406 is not found 00:32:20.886 10:04:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81406 is not found' 00:32:20.886 10:04:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:32:21.148 Remove shared memory files 00:32:21.148 10:04:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:32:21.148 10:04:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:21.148 10:04:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:21.148 10:04:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:21.148 10:04:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:32:21.148 10:04:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:21.148 10:04:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:21.148 00:32:21.148 real 5m3.853s 00:32:21.148 user 5m16.587s 00:32:21.148 sys 0m23.807s 00:32:21.148 10:04:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:21.148 10:04:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:21.148 ************************************ 00:32:21.148 END TEST ftl_dirty_shutdown 00:32:21.148 ************************************ 00:32:21.410 10:05:00 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:21.410 10:05:00 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:21.410 10:05:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:21.410 10:05:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:21.410 ************************************ 00:32:21.410 START TEST ftl_upgrade_shutdown 00:32:21.410 ************************************ 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:21.410 * Looking for test storage... 
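For readers reconstructing the teardown traced just above: the restore_kill step of the dirty-shutdown test re-verifies the data written before the dirty shutdown, removes the generated FTL config and test files, tolerates a target process that has already exited, and then clears shared-memory leftovers. A minimal standalone sketch of that flow follows; the paths and the md5 check are taken directly from the trace, while the pid value and the /dev/shm glob are illustrative approximations (the real helpers live in test/ftl/dirty_shutdown.sh and test/common/autotest_common.sh).

  #!/usr/bin/env bash
  # Sketch of the verify-and-teardown sequence traced above.
  # Paths mirror the trace; the pid and the /dev/shm glob are approximations.
  testdir=/home/vagrant/spdk_repo/spdk/test/ftl
  svcpid=81406

  md5sum -c "$testdir/testfile2.md5"                      # data written before the dirty shutdown must still verify

  rm -f "$testdir/config/ftl.json"                        # generated FTL bdev config
  rm -f "$testdir/testfile" "$testdir/testfile2"          # test data files
  rm -f "$testdir/testfile.md5" "$testdir/testfile2.md5"  # their checksums

  # killprocess: probe with kill -0 first, terminate only if the pid still exists
  if kill -0 "$svcpid" 2>/dev/null; then
      kill "$svcpid"
  else
      echo "Process with pid $svcpid is not found"
  fi

  rm -f /dev/shm/spdk* 2>/dev/null || true                # remove_shm: clear shared-memory leftovers (approximate glob)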
00:32:21.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:32:21.410 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:21.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.411 --rc genhtml_branch_coverage=1 00:32:21.411 --rc genhtml_function_coverage=1 00:32:21.411 --rc genhtml_legend=1 00:32:21.411 --rc geninfo_all_blocks=1 00:32:21.411 --rc geninfo_unexecuted_blocks=1 00:32:21.411 00:32:21.411 ' 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:21.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.411 --rc genhtml_branch_coverage=1 00:32:21.411 --rc genhtml_function_coverage=1 00:32:21.411 --rc genhtml_legend=1 00:32:21.411 --rc geninfo_all_blocks=1 00:32:21.411 --rc geninfo_unexecuted_blocks=1 00:32:21.411 00:32:21.411 ' 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:21.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.411 --rc genhtml_branch_coverage=1 00:32:21.411 --rc genhtml_function_coverage=1 00:32:21.411 --rc genhtml_legend=1 00:32:21.411 --rc geninfo_all_blocks=1 00:32:21.411 --rc geninfo_unexecuted_blocks=1 00:32:21.411 00:32:21.411 ' 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:21.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.411 --rc genhtml_branch_coverage=1 00:32:21.411 --rc genhtml_function_coverage=1 00:32:21.411 --rc genhtml_legend=1 00:32:21.411 --rc geninfo_all_blocks=1 00:32:21.411 --rc geninfo_unexecuted_blocks=1 00:32:21.411 00:32:21.411 ' 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:32:21.411 10:05:00 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84638 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84638 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84638 ']' 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:21.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:21.411 10:05:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:21.672 [2024-11-28 10:05:00.354281] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
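The parameters for this run come straight from the ftl/common.sh and upgrade_shutdown.sh defaults traced above. A condensed summary of the environment the test target starts with (values exactly as exported in this trace):

  export FTL_BDEV=ftl
  export FTL_BASE=0000:00:11.0   FTL_BASE_SIZE=20480    # base device and its size in MiB
  export FTL_CACHE=0000:00:10.0  FTL_CACHE_SIZE=5120    # NV cache device and its size in MiB
  export FTL_L2P_DRAM_LIMIT=2                           # MiB of DRAM for the resident L2P cache

The SPDK target itself is pinned to core 0 ('--cpumask=[0]'), while the short-lived initiator processes used later in the trace run on core 1.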
00:32:21.672 [2024-11-28 10:05:00.354431] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84638 ] 00:32:21.672 [2024-11-28 10:05:00.516823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.938 [2024-11-28 10:05:00.667017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:32:22.584 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:32:22.845 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:32:22.845 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:32:22.845 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:32:22.845 10:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:32:22.845 10:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:22.845 10:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:22.845 10:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:32:22.845 10:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:32:23.107 10:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:23.107 { 00:32:23.107 "name": "basen1", 00:32:23.107 "aliases": [ 00:32:23.107 "6f2520c2-9985-4207-bda5-dc321a5c96a0" 00:32:23.107 ], 00:32:23.107 "product_name": "NVMe disk", 00:32:23.107 "block_size": 4096, 00:32:23.107 "num_blocks": 1310720, 00:32:23.107 "uuid": "6f2520c2-9985-4207-bda5-dc321a5c96a0", 00:32:23.107 "numa_id": -1, 00:32:23.107 "assigned_rate_limits": { 00:32:23.107 "rw_ios_per_sec": 0, 00:32:23.107 "rw_mbytes_per_sec": 0, 00:32:23.107 "r_mbytes_per_sec": 0, 00:32:23.107 "w_mbytes_per_sec": 0 00:32:23.107 }, 00:32:23.107 "claimed": true, 00:32:23.107 "claim_type": "read_many_write_one", 00:32:23.107 "zoned": false, 00:32:23.107 "supported_io_types": { 00:32:23.107 "read": true, 00:32:23.107 "write": true, 00:32:23.107 "unmap": true, 00:32:23.107 "flush": true, 00:32:23.107 "reset": true, 00:32:23.107 "nvme_admin": true, 00:32:23.107 "nvme_io": true, 00:32:23.107 "nvme_io_md": false, 00:32:23.107 "write_zeroes": true, 00:32:23.107 "zcopy": false, 00:32:23.107 "get_zone_info": false, 00:32:23.107 "zone_management": false, 00:32:23.107 "zone_append": false, 00:32:23.107 "compare": true, 00:32:23.107 "compare_and_write": false, 00:32:23.107 "abort": true, 00:32:23.107 "seek_hole": false, 00:32:23.107 "seek_data": false, 00:32:23.107 "copy": true, 00:32:23.107 "nvme_iov_md": false 00:32:23.107 }, 00:32:23.107 "driver_specific": { 00:32:23.107 "nvme": [ 00:32:23.107 { 00:32:23.107 "pci_address": "0000:00:11.0", 00:32:23.107 "trid": { 00:32:23.107 "trtype": "PCIe", 00:32:23.107 "traddr": "0000:00:11.0" 00:32:23.107 }, 00:32:23.107 "ctrlr_data": { 00:32:23.107 "cntlid": 0, 00:32:23.107 "vendor_id": "0x1b36", 00:32:23.107 "model_number": "QEMU NVMe Ctrl", 00:32:23.107 "serial_number": "12341", 00:32:23.107 "firmware_revision": "8.0.0", 00:32:23.107 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:23.107 "oacs": { 00:32:23.107 "security": 0, 00:32:23.107 "format": 1, 00:32:23.107 "firmware": 0, 00:32:23.107 "ns_manage": 1 00:32:23.107 }, 00:32:23.107 "multi_ctrlr": false, 00:32:23.107 "ana_reporting": false 00:32:23.107 }, 00:32:23.107 "vs": { 00:32:23.107 "nvme_version": "1.4" 00:32:23.107 }, 00:32:23.107 "ns_data": { 00:32:23.107 "id": 1, 00:32:23.107 "can_share": false 00:32:23.107 } 00:32:23.107 } 00:32:23.107 ], 00:32:23.107 "mp_policy": "active_passive" 00:32:23.107 } 00:32:23.107 } 00:32:23.107 ]' 00:32:23.107 10:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:23.107 10:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:23.107 10:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:23.107 10:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:23.107 10:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:23.107 10:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:32:23.107 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:32:23.107 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:32:23.107 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:32:23.107 10:05:01 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:23.107 10:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:23.368 10:05:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=2633549e-ff47-4487-a328-d6310c1ba48c 00:32:23.368 10:05:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:32:23.368 10:05:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2633549e-ff47-4487-a328-d6310c1ba48c 00:32:23.629 10:05:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:32:23.891 10:05:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=ca474e77-d0c4-4234-9027-cbaae4a00cbd 00:32:23.891 10:05:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u ca474e77-d0c4-4234-9027-cbaae4a00cbd 00:32:24.153 10:05:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=33855e4e-0fe6-490d-993c-90aacd869221 00:32:24.153 10:05:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 33855e4e-0fe6-490d-993c-90aacd869221 ]] 00:32:24.153 10:05:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 33855e4e-0fe6-490d-993c-90aacd869221 5120 00:32:24.153 10:05:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:32:24.153 10:05:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:24.153 10:05:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=33855e4e-0fe6-490d-993c-90aacd869221 00:32:24.153 10:05:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:32:24.153 10:05:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 33855e4e-0fe6-490d-993c-90aacd869221 00:32:24.153 10:05:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=33855e4e-0fe6-490d-993c-90aacd869221 00:32:24.153 10:05:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:24.153 10:05:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:24.153 10:05:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:24.153 10:05:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 33855e4e-0fe6-490d-993c-90aacd869221 00:32:24.414 10:05:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:24.414 { 00:32:24.414 "name": "33855e4e-0fe6-490d-993c-90aacd869221", 00:32:24.414 "aliases": [ 00:32:24.414 "lvs/basen1p0" 00:32:24.414 ], 00:32:24.414 "product_name": "Logical Volume", 00:32:24.414 "block_size": 4096, 00:32:24.414 "num_blocks": 5242880, 00:32:24.414 "uuid": "33855e4e-0fe6-490d-993c-90aacd869221", 00:32:24.414 "assigned_rate_limits": { 00:32:24.414 "rw_ios_per_sec": 0, 00:32:24.414 "rw_mbytes_per_sec": 0, 00:32:24.414 "r_mbytes_per_sec": 0, 00:32:24.414 "w_mbytes_per_sec": 0 00:32:24.414 }, 00:32:24.414 "claimed": false, 00:32:24.414 "zoned": false, 00:32:24.414 "supported_io_types": { 00:32:24.414 "read": true, 00:32:24.414 "write": true, 00:32:24.414 "unmap": true, 00:32:24.414 "flush": false, 00:32:24.414 "reset": true, 00:32:24.414 "nvme_admin": false, 00:32:24.414 "nvme_io": false, 00:32:24.414 "nvme_io_md": false, 00:32:24.414 "write_zeroes": 
true, 00:32:24.414 "zcopy": false, 00:32:24.414 "get_zone_info": false, 00:32:24.414 "zone_management": false, 00:32:24.414 "zone_append": false, 00:32:24.414 "compare": false, 00:32:24.414 "compare_and_write": false, 00:32:24.415 "abort": false, 00:32:24.415 "seek_hole": true, 00:32:24.415 "seek_data": true, 00:32:24.415 "copy": false, 00:32:24.415 "nvme_iov_md": false 00:32:24.415 }, 00:32:24.415 "driver_specific": { 00:32:24.415 "lvol": { 00:32:24.415 "lvol_store_uuid": "ca474e77-d0c4-4234-9027-cbaae4a00cbd", 00:32:24.415 "base_bdev": "basen1", 00:32:24.415 "thin_provision": true, 00:32:24.415 "num_allocated_clusters": 0, 00:32:24.415 "snapshot": false, 00:32:24.415 "clone": false, 00:32:24.415 "esnap_clone": false 00:32:24.415 } 00:32:24.415 } 00:32:24.415 } 00:32:24.415 ]' 00:32:24.415 10:05:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:24.415 10:05:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:24.415 10:05:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:24.415 10:05:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:32:24.415 10:05:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:32:24.415 10:05:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:32:24.415 10:05:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:32:24.415 10:05:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:32:24.415 10:05:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:32:24.675 10:05:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:32:24.675 10:05:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:32:24.675 10:05:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:32:24.938 10:05:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:32:24.938 10:05:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:32:24.938 10:05:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 33855e4e-0fe6-490d-993c-90aacd869221 -c cachen1p0 --l2p_dram_limit 2 00:32:24.938 [2024-11-28 10:05:03.742515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.938 [2024-11-28 10:05:03.742560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:24.938 [2024-11-28 10:05:03.742574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:24.938 [2024-11-28 10:05:03.742580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.938 [2024-11-28 10:05:03.742625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.938 [2024-11-28 10:05:03.742633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:24.938 [2024-11-28 10:05:03.742640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:32:24.938 [2024-11-28 10:05:03.742647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.938 [2024-11-28 10:05:03.742663] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:24.938 [2024-11-28 
10:05:03.743179] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:24.938 [2024-11-28 10:05:03.743197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.938 [2024-11-28 10:05:03.743204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:24.938 [2024-11-28 10:05:03.743212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.536 ms 00:32:24.938 [2024-11-28 10:05:03.743218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.938 [2024-11-28 10:05:03.743244] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 8ad00f45-ab03-4a63-8071-370546f137fd 00:32:24.938 [2024-11-28 10:05:03.744497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.938 [2024-11-28 10:05:03.744523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:32:24.938 [2024-11-28 10:05:03.744532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:32:24.938 [2024-11-28 10:05:03.744540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.939 [2024-11-28 10:05:03.751255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.939 [2024-11-28 10:05:03.751283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:24.939 [2024-11-28 10:05:03.751290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.654 ms 00:32:24.939 [2024-11-28 10:05:03.751298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.939 [2024-11-28 10:05:03.751328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.939 [2024-11-28 10:05:03.751337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:24.939 [2024-11-28 10:05:03.751343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:24.939 [2024-11-28 10:05:03.751353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.939 [2024-11-28 10:05:03.751386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.939 [2024-11-28 10:05:03.751395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:24.939 [2024-11-28 10:05:03.751403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:24.939 [2024-11-28 10:05:03.751412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.939 [2024-11-28 10:05:03.751428] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:24.939 [2024-11-28 10:05:03.754639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.939 [2024-11-28 10:05:03.754806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:24.939 [2024-11-28 10:05:03.754824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.212 ms 00:32:24.939 [2024-11-28 10:05:03.754831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.939 [2024-11-28 10:05:03.754856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.939 [2024-11-28 10:05:03.754862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:24.939 [2024-11-28 10:05:03.754870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:24.939 [2024-11-28 10:05:03.754876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:24.939 [2024-11-28 10:05:03.754897] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:32:24.939 [2024-11-28 10:05:03.755009] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:24.939 [2024-11-28 10:05:03.755022] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:24.939 [2024-11-28 10:05:03.755030] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:24.939 [2024-11-28 10:05:03.755040] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:24.939 [2024-11-28 10:05:03.755048] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:24.939 [2024-11-28 10:05:03.755056] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:24.939 [2024-11-28 10:05:03.755064] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:24.939 [2024-11-28 10:05:03.755072] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:24.939 [2024-11-28 10:05:03.755078] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:24.939 [2024-11-28 10:05:03.755086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.939 [2024-11-28 10:05:03.755092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:24.939 [2024-11-28 10:05:03.755100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:32:24.939 [2024-11-28 10:05:03.755106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.939 [2024-11-28 10:05:03.755187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.939 [2024-11-28 10:05:03.755201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:24.939 [2024-11-28 10:05:03.755209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:32:24.939 [2024-11-28 10:05:03.755215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.939 [2024-11-28 10:05:03.755296] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:24.939 [2024-11-28 10:05:03.755304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:24.939 [2024-11-28 10:05:03.755313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:24.939 [2024-11-28 10:05:03.755319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:24.939 [2024-11-28 10:05:03.755327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:24.939 [2024-11-28 10:05:03.755333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:24.939 [2024-11-28 10:05:03.755340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:24.939 [2024-11-28 10:05:03.755346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:24.939 [2024-11-28 10:05:03.755353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:24.939 [2024-11-28 10:05:03.755358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:24.939 [2024-11-28 10:05:03.755364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:24.939 [2024-11-28 10:05:03.755370] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:32:24.939 [2024-11-28 10:05:03.755378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:24.939 [2024-11-28 10:05:03.755383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:24.939 [2024-11-28 10:05:03.755390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:24.939 [2024-11-28 10:05:03.755395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:24.939 [2024-11-28 10:05:03.755404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:24.939 [2024-11-28 10:05:03.755410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:24.939 [2024-11-28 10:05:03.755418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:24.939 [2024-11-28 10:05:03.755423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:24.939 [2024-11-28 10:05:03.755430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:24.939 [2024-11-28 10:05:03.755435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:24.939 [2024-11-28 10:05:03.755442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:24.939 [2024-11-28 10:05:03.755447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:24.939 [2024-11-28 10:05:03.755454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:24.939 [2024-11-28 10:05:03.755459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:24.939 [2024-11-28 10:05:03.755467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:24.939 [2024-11-28 10:05:03.755473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:24.939 [2024-11-28 10:05:03.755480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:24.939 [2024-11-28 10:05:03.755485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:24.939 [2024-11-28 10:05:03.755492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:24.939 [2024-11-28 10:05:03.755498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:24.939 [2024-11-28 10:05:03.755506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:24.939 [2024-11-28 10:05:03.755511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:24.939 [2024-11-28 10:05:03.755517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:24.939 [2024-11-28 10:05:03.755523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:24.939 [2024-11-28 10:05:03.755530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:24.939 [2024-11-28 10:05:03.755535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:24.939 [2024-11-28 10:05:03.755542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:24.939 [2024-11-28 10:05:03.755547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:24.939 [2024-11-28 10:05:03.755554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:24.939 [2024-11-28 10:05:03.755559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:24.939 [2024-11-28 10:05:03.755565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:24.939 [2024-11-28 10:05:03.755570] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:32:24.939 [2024-11-28 10:05:03.755578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:24.939 [2024-11-28 10:05:03.755584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:24.939 [2024-11-28 10:05:03.755592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:24.939 [2024-11-28 10:05:03.755598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:24.939 [2024-11-28 10:05:03.755607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:24.939 [2024-11-28 10:05:03.755612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:24.939 [2024-11-28 10:05:03.755619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:24.939 [2024-11-28 10:05:03.755624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:24.939 [2024-11-28 10:05:03.755630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:24.939 [2024-11-28 10:05:03.755639] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:24.939 [2024-11-28 10:05:03.755650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:24.939 [2024-11-28 10:05:03.755656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:24.939 [2024-11-28 10:05:03.755664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:24.939 [2024-11-28 10:05:03.755669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:24.939 [2024-11-28 10:05:03.755676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:24.939 [2024-11-28 10:05:03.755682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:24.939 [2024-11-28 10:05:03.755690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:24.939 [2024-11-28 10:05:03.755696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:24.939 [2024-11-28 10:05:03.755703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:24.939 [2024-11-28 10:05:03.755708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:24.940 [2024-11-28 10:05:03.755717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:24.940 [2024-11-28 10:05:03.755723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:24.940 [2024-11-28 10:05:03.755730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:24.940 [2024-11-28 10:05:03.755735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:24.940 [2024-11-28 10:05:03.755743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:24.940 [2024-11-28 10:05:03.755749] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:24.940 [2024-11-28 10:05:03.755756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:24.940 [2024-11-28 10:05:03.755763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:24.940 [2024-11-28 10:05:03.755770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:24.940 [2024-11-28 10:05:03.755775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:24.940 [2024-11-28 10:05:03.755782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:24.940 [2024-11-28 10:05:03.755788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.940 [2024-11-28 10:05:03.755796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:24.940 [2024-11-28 10:05:03.755802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.546 ms 00:32:24.940 [2024-11-28 10:05:03.755808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.940 [2024-11-28 10:05:03.755848] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
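At this point the FTL instance named ftl has been assembled from the pieces traced above: the 5120 MiB QEMU namespace at 0000:00:11.0 (basen1, 1310720 blocks of 4096 B) holds a thin-provisioned 20480 MiB logical volume basen1p0, a 5120 MiB split of the second controller at 0000:00:10.0 (cachen1p0) acts as the NV write-buffer cache, and the L2P is limited to 2 MiB of resident DRAM. A condensed replay of the RPC calls from this trace, using the UUIDs printed above, looks roughly like:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # -> basen1 (1310720 x 4096 B = 5120 MiB)
  $rpc bdev_lvol_create_lvstore basen1 lvs                            # -> lvstore ca474e77-d0c4-4234-9027-cbaae4a00cbd
  $rpc bdev_lvol_create basen1p0 20480 -t -u ca474e77-d0c4-4234-9027-cbaae4a00cbd   # 20480 MiB, thin-provisioned
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
  $rpc bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0, the 5120 MiB NV cache
  $rpc -t 60 bdev_ftl_create -b ftl -d 33855e4e-0fe6-490d-993c-90aacd869221 -c cachen1p0 --l2p_dram_limit 2

The scrub notice above (and the 'Scrubbing 5 chunks' line that follows) is the NV cache being wiped before first use; it accounts for roughly 3.5 s of the 3.95 s FTL startup reported further down.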
00:32:24.940 [2024-11-28 10:05:03.755860] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:29.149 [2024-11-28 10:05:07.313396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.149 [2024-11-28 10:05:07.313450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:29.149 [2024-11-28 10:05:07.313463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3557.535 ms 00:32:29.149 [2024-11-28 10:05:07.313475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.149 [2024-11-28 10:05:07.343942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.149 [2024-11-28 10:05:07.343994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:29.149 [2024-11-28 10:05:07.344007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.262 ms 00:32:29.149 [2024-11-28 10:05:07.344018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.149 [2024-11-28 10:05:07.344097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.149 [2024-11-28 10:05:07.344109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:29.149 [2024-11-28 10:05:07.344118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:32:29.149 [2024-11-28 10:05:07.344135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.149 [2024-11-28 10:05:07.380550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.149 [2024-11-28 10:05:07.380603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:29.149 [2024-11-28 10:05:07.380615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.354 ms 00:32:29.149 [2024-11-28 10:05:07.380626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.149 [2024-11-28 10:05:07.380667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.149 [2024-11-28 10:05:07.380679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:29.149 [2024-11-28 10:05:07.380688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:29.149 [2024-11-28 10:05:07.380699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.149 [2024-11-28 10:05:07.381412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.149 [2024-11-28 10:05:07.381481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:29.149 [2024-11-28 10:05:07.381502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.636 ms 00:32:29.149 [2024-11-28 10:05:07.381514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.149 [2024-11-28 10:05:07.381567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.149 [2024-11-28 10:05:07.381584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:29.149 [2024-11-28 10:05:07.381595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:32:29.149 [2024-11-28 10:05:07.381608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.149 [2024-11-28 10:05:07.401794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.149 [2024-11-28 10:05:07.401848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:29.149 [2024-11-28 10:05:07.401860] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.166 ms 00:32:29.149 [2024-11-28 10:05:07.401871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.149 [2024-11-28 10:05:07.424659] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:29.149 [2024-11-28 10:05:07.426417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.149 [2024-11-28 10:05:07.426464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:29.149 [2024-11-28 10:05:07.426482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.447 ms 00:32:29.149 [2024-11-28 10:05:07.426491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.149 [2024-11-28 10:05:07.461295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.149 [2024-11-28 10:05:07.461350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:32:29.149 [2024-11-28 10:05:07.461368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.750 ms 00:32:29.149 [2024-11-28 10:05:07.461378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.149 [2024-11-28 10:05:07.461502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.149 [2024-11-28 10:05:07.461513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:29.149 [2024-11-28 10:05:07.461529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:32:29.149 [2024-11-28 10:05:07.461538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.149 [2024-11-28 10:05:07.488130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.149 [2024-11-28 10:05:07.488192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:32:29.149 [2024-11-28 10:05:07.488210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.528 ms 00:32:29.149 [2024-11-28 10:05:07.488220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.149 [2024-11-28 10:05:07.514525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.149 [2024-11-28 10:05:07.514575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:32:29.149 [2024-11-28 10:05:07.514591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.238 ms 00:32:29.149 [2024-11-28 10:05:07.514600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.149 [2024-11-28 10:05:07.515243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.149 [2024-11-28 10:05:07.515292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:29.149 [2024-11-28 10:05:07.515315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.589 ms 00:32:29.150 [2024-11-28 10:05:07.515324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.150 [2024-11-28 10:05:07.611253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.150 [2024-11-28 10:05:07.611304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:32:29.150 [2024-11-28 10:05:07.611324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 95.860 ms 00:32:29.150 [2024-11-28 10:05:07.611334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.150 [2024-11-28 10:05:07.640604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:29.150 [2024-11-28 10:05:07.640655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:32:29.150 [2024-11-28 10:05:07.640672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.160 ms 00:32:29.150 [2024-11-28 10:05:07.640682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.150 [2024-11-28 10:05:07.667882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.150 [2024-11-28 10:05:07.667932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:32:29.150 [2024-11-28 10:05:07.667948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.137 ms 00:32:29.150 [2024-11-28 10:05:07.667957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.150 [2024-11-28 10:05:07.695063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.150 [2024-11-28 10:05:07.695393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:29.150 [2024-11-28 10:05:07.695424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.045 ms 00:32:29.150 [2024-11-28 10:05:07.695435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.150 [2024-11-28 10:05:07.695610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.150 [2024-11-28 10:05:07.695641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:29.150 [2024-11-28 10:05:07.695661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:32:29.150 [2024-11-28 10:05:07.695670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.150 [2024-11-28 10:05:07.695807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.150 [2024-11-28 10:05:07.695825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:29.150 [2024-11-28 10:05:07.695838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:32:29.150 [2024-11-28 10:05:07.695848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.150 [2024-11-28 10:05:07.697287] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3954.150 ms, result 0 00:32:29.150 { 00:32:29.150 "name": "ftl", 00:32:29.150 "uuid": "8ad00f45-ab03-4a63-8071-370546f137fd" 00:32:29.150 } 00:32:29.150 10:05:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:32:29.150 [2024-11-28 10:05:07.920091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:29.150 10:05:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:32:29.412 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:32:29.674 [2024-11-28 10:05:08.348460] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:29.674 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:32:29.674 [2024-11-28 10:05:08.548741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:29.936 10:05:08 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:30.198 Fill FTL, iteration 1 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84760 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84760 /var/tmp/spdk.tgt.sock 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84760 ']' 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:32:30.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:30.198 10:05:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:30.198 [2024-11-28 10:05:08.961955] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
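With the target configuration saved, the FTL bdev is exported over NVMe/TCP so that separate initiator processes can drive I/O against it: subsystem nqn.2018-09.io.spdk:cnode0 gets the ftl bdev as its single namespace and a listener on 127.0.0.1:4420. The fill loop that starts here runs iterations=2 passes, each writing count=1024 blocks of bs=1 MiB (1 GiB per pass) at queue depth 2 and advancing seek/skip by 1024 MiB between passes. The export itself is the four RPCs traced just above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport --trtype TCP
  $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1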
00:32:30.198 [2024-11-28 10:05:08.962255] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84760 ] 00:32:30.460 [2024-11-28 10:05:09.121953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.460 [2024-11-28 10:05:09.220931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.032 10:05:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:31.032 10:05:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:31.032 10:05:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:32:31.293 ftln1 00:32:31.293 10:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:32:31.293 10:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:32:31.553 10:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:32:31.553 10:05:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84760 00:32:31.553 10:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84760 ']' 00:32:31.553 10:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84760 00:32:31.554 10:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:31.554 10:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:31.554 10:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84760 00:32:31.554 killing process with pid 84760 00:32:31.554 10:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:31.554 10:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:31.554 10:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84760' 00:32:31.554 10:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84760 00:32:31.554 10:05:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84760 00:32:32.934 10:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:32:32.934 10:05:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:32.934 [2024-11-28 10:05:11.641088] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
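tcp_dd does not keep an initiator application running for the copy. As traced above, it starts a short-lived spdk_tgt on core 1, attaches the exported subsystem as controller ftl (whose namespace shows up as bdev ftln1), captures the resulting bdev configuration (the '{"subsystems": [' / save_subsystem_config / ']}' lines, which the helper presumably writes to ini.json), kills that process, and then hands the JSON to spdk_dd, which recreates the bdev and performs the transfer itself. A rough sketch of the sequence, with paths as in this run:

  ini_rpc=/var/tmp/spdk.tgt.sock
  ini_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=$ini_rpc &
  ini_pid=$!                                    # the real helper waits for the RPC socket before continuing
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $ini_rpc"
  $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  { echo '{"subsystems": ['; $rpc save_subsystem_config -n bdev; echo ']}'; } > $ini_json
  kill $ini_pid
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=$ini_rpc --json=$ini_json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0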
00:32:32.934 [2024-11-28 10:05:11.641235] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84802 ] 00:32:32.934 [2024-11-28 10:05:11.796316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.192 [2024-11-28 10:05:11.872214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.565  [2024-11-28T10:05:14.382Z] Copying: 257/1024 [MB] (257 MBps) [2024-11-28T10:05:15.317Z] Copying: 522/1024 [MB] (265 MBps) [2024-11-28T10:05:16.262Z] Copying: 778/1024 [MB] (256 MBps) [2024-11-28T10:05:16.830Z] Copying: 1024/1024 [MB] (average 257 MBps) 00:32:37.950 00:32:37.950 Calculate MD5 checksum, iteration 1 00:32:37.950 10:05:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:32:37.950 10:05:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:32:37.950 10:05:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:37.950 10:05:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:37.950 10:05:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:37.950 10:05:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:37.950 10:05:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:37.950 10:05:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:37.950 [2024-11-28 10:05:16.800534] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:32:37.950 [2024-11-28 10:05:16.800648] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84861 ] 00:32:38.210 [2024-11-28 10:05:16.956010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.210 [2024-11-28 10:05:17.029778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.585  [2024-11-28T10:05:19.032Z] Copying: 666/1024 [MB] (666 MBps) [2024-11-28T10:05:19.601Z] Copying: 1024/1024 [MB] (average 670 MBps) 00:32:40.721 00:32:40.721 10:05:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:32:40.721 10:05:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:42.633 10:05:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:42.633 10:05:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=68de6b9a4e308b77a92cb99a3849fc0e 00:32:42.633 10:05:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:42.633 Fill FTL, iteration 2 00:32:42.633 10:05:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:42.633 10:05:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:32:42.633 10:05:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:42.633 10:05:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:42.633 10:05:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:42.633 10:05:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:42.633 10:05:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:42.633 10:05:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:42.633 [2024-11-28 10:05:21.355589] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
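Each 1 GiB fill is immediately read back over the same spdk_dd path and hashed; the per-pass MD5 sums are stored in the sums array (68de6b9a4e308b77a92cb99a3849fc0e for the first gigabyte here) for verification later in upgrade_shutdown.sh. The read-back is the mirror image of the fill; for the first pass, roughly:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[0]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d ' ')   # 68de6b9a4e308b77a92cb99a3849fc0e in this run

The second pass repeats this at an offset of 1024 MiB (--seek/--skip=1024).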
00:32:42.633 [2024-11-28 10:05:21.355677] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84912 ] 00:32:42.633 [2024-11-28 10:05:21.504660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.892 [2024-11-28 10:05:21.587838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.268  [2024-11-28T10:05:24.084Z] Copying: 257/1024 [MB] (257 MBps) [2024-11-28T10:05:25.018Z] Copying: 506/1024 [MB] (249 MBps) [2024-11-28T10:05:26.065Z] Copying: 771/1024 [MB] (265 MBps) [2024-11-28T10:05:26.640Z] Copying: 1024/1024 [MB] (average 255 MBps) 00:32:47.760 00:32:47.760 10:05:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:32:47.760 10:05:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:32:47.760 Calculate MD5 checksum, iteration 2 00:32:47.760 10:05:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:47.760 10:05:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:47.760 10:05:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:47.760 10:05:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:47.760 10:05:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:47.760 10:05:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:47.760 [2024-11-28 10:05:26.546636] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
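Once the second gigabyte has been written, the rest of this trace shows where the data ended up and how the upgrade is armed: verbose_mode is enabled, bdev_ftl_get_properties dumps the band and chunk state (two NV cache chunks CLOSED at utilization 1.0, one OPEN holding a sliver at 0.001953125, the rest empty), prep_upgrade_on_shutdown is switched on, and the script counts the non-empty cache chunks (the [[ used -eq 0 ]] check further down), presumably to ensure there is dirty data to carry across the shutdown. The chunk count is a one-liner over the properties JSON:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
  # -> 3 in this run (chunks 1 and 2 closed, chunk 3 open)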
00:32:47.760 [2024-11-28 10:05:26.546754] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84970 ] 00:32:48.019 [2024-11-28 10:05:26.703310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.019 [2024-11-28 10:05:26.776905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.394  [2024-11-28T10:05:28.840Z] Copying: 646/1024 [MB] (646 MBps) [2024-11-28T10:05:29.778Z] Copying: 1024/1024 [MB] (average 650 MBps) 00:32:50.898 00:32:50.898 10:05:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:32:50.898 10:05:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:52.808 10:05:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:52.808 10:05:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=de870a5a7096685d53f224c342da02a7 00:32:52.808 10:05:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:52.808 10:05:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:52.808 10:05:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:53.069 [2024-11-28 10:05:31.704785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.069 [2024-11-28 10:05:31.704840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:53.069 [2024-11-28 10:05:31.704853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:53.069 [2024-11-28 10:05:31.704861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.069 [2024-11-28 10:05:31.704880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.069 [2024-11-28 10:05:31.704890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:53.069 [2024-11-28 10:05:31.704897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:53.069 [2024-11-28 10:05:31.704903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.069 [2024-11-28 10:05:31.704919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.069 [2024-11-28 10:05:31.704926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:53.069 [2024-11-28 10:05:31.704933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:53.069 [2024-11-28 10:05:31.704940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.069 [2024-11-28 10:05:31.704995] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.199 ms, result 0 00:32:53.069 true 00:32:53.069 10:05:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:53.069 { 00:32:53.069 "name": "ftl", 00:32:53.069 "properties": [ 00:32:53.069 { 00:32:53.069 "name": "superblock_version", 00:32:53.069 "value": 5, 00:32:53.069 "read-only": true 00:32:53.069 }, 00:32:53.069 { 00:32:53.069 "name": "base_device", 00:32:53.069 "bands": [ 00:32:53.069 { 00:32:53.069 "id": 0, 00:32:53.069 "state": "FREE", 00:32:53.069 "validity": 0.0 
00:32:53.069 }, 00:32:53.069 { 00:32:53.069 "id": 1, 00:32:53.069 "state": "FREE", 00:32:53.069 "validity": 0.0 00:32:53.069 }, 00:32:53.069 { 00:32:53.069 "id": 2, 00:32:53.069 "state": "FREE", 00:32:53.069 "validity": 0.0 00:32:53.069 }, 00:32:53.069 { 00:32:53.069 "id": 3, 00:32:53.069 "state": "FREE", 00:32:53.069 "validity": 0.0 00:32:53.069 }, 00:32:53.069 { 00:32:53.069 "id": 4, 00:32:53.069 "state": "FREE", 00:32:53.069 "validity": 0.0 00:32:53.069 }, 00:32:53.069 { 00:32:53.069 "id": 5, 00:32:53.069 "state": "FREE", 00:32:53.069 "validity": 0.0 00:32:53.069 }, 00:32:53.069 { 00:32:53.069 "id": 6, 00:32:53.069 "state": "FREE", 00:32:53.069 "validity": 0.0 00:32:53.069 }, 00:32:53.069 { 00:32:53.069 "id": 7, 00:32:53.069 "state": "FREE", 00:32:53.070 "validity": 0.0 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "id": 8, 00:32:53.070 "state": "FREE", 00:32:53.070 "validity": 0.0 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "id": 9, 00:32:53.070 "state": "FREE", 00:32:53.070 "validity": 0.0 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "id": 10, 00:32:53.070 "state": "FREE", 00:32:53.070 "validity": 0.0 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "id": 11, 00:32:53.070 "state": "FREE", 00:32:53.070 "validity": 0.0 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "id": 12, 00:32:53.070 "state": "FREE", 00:32:53.070 "validity": 0.0 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "id": 13, 00:32:53.070 "state": "FREE", 00:32:53.070 "validity": 0.0 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "id": 14, 00:32:53.070 "state": "FREE", 00:32:53.070 "validity": 0.0 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "id": 15, 00:32:53.070 "state": "FREE", 00:32:53.070 "validity": 0.0 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "id": 16, 00:32:53.070 "state": "FREE", 00:32:53.070 "validity": 0.0 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "id": 17, 00:32:53.070 "state": "FREE", 00:32:53.070 "validity": 0.0 00:32:53.070 } 00:32:53.070 ], 00:32:53.070 "read-only": true 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "name": "cache_device", 00:32:53.070 "type": "bdev", 00:32:53.070 "chunks": [ 00:32:53.070 { 00:32:53.070 "id": 0, 00:32:53.070 "state": "INACTIVE", 00:32:53.070 "utilization": 0.0 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "id": 1, 00:32:53.070 "state": "CLOSED", 00:32:53.070 "utilization": 1.0 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "id": 2, 00:32:53.070 "state": "CLOSED", 00:32:53.070 "utilization": 1.0 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "id": 3, 00:32:53.070 "state": "OPEN", 00:32:53.070 "utilization": 0.001953125 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "id": 4, 00:32:53.070 "state": "OPEN", 00:32:53.070 "utilization": 0.0 00:32:53.070 } 00:32:53.070 ], 00:32:53.070 "read-only": true 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "name": "verbose_mode", 00:32:53.070 "value": true, 00:32:53.070 "unit": "", 00:32:53.070 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:53.070 }, 00:32:53.070 { 00:32:53.070 "name": "prep_upgrade_on_shutdown", 00:32:53.070 "value": false, 00:32:53.070 "unit": "", 00:32:53.070 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:53.070 } 00:32:53.070 ] 00:32:53.070 } 00:32:53.070 10:05:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:32:53.332 [2024-11-28 10:05:32.061048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
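The bdev_ftl_get_properties dump above reports every cache_device chunk with a state (INACTIVE, OPEN or CLOSED) and a utilization figure; the test's next step pipes that JSON through jq to count the chunks that still hold data and checks the count against zero before shutting the target down. A minimal standalone sketch of the same check, assuming the default /var/tmp/spdk.sock RPC socket and the FTL bdev name "ftl" used throughout this run:

    # Count cache chunks with non-zero utilization for bdev "ftl"
    # (same jq filter the test uses; the socket path is assumed to be the default).
    used=$(./scripts/rpc.py -s /var/tmp/spdk.sock bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    echo "cache chunks in use: ${used}"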
00:32:53.332 [2024-11-28 10:05:32.061080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:53.332 [2024-11-28 10:05:32.061089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:53.332 [2024-11-28 10:05:32.061096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.332 [2024-11-28 10:05:32.061113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.332 [2024-11-28 10:05:32.061121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:53.332 [2024-11-28 10:05:32.061127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:53.332 [2024-11-28 10:05:32.061133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.332 [2024-11-28 10:05:32.061148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.332 [2024-11-28 10:05:32.061167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:53.332 [2024-11-28 10:05:32.061174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:53.332 [2024-11-28 10:05:32.061180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.332 [2024-11-28 10:05:32.061224] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.165 ms, result 0 00:32:53.332 true 00:32:53.332 10:05:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:32:53.332 10:05:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:53.332 10:05:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:53.593 10:05:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:32:53.593 10:05:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:32:53.593 10:05:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:53.593 [2024-11-28 10:05:32.425338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.593 [2024-11-28 10:05:32.425368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:53.593 [2024-11-28 10:05:32.425376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:53.593 [2024-11-28 10:05:32.425382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.593 [2024-11-28 10:05:32.425399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.593 [2024-11-28 10:05:32.425406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:53.593 [2024-11-28 10:05:32.425412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:53.593 [2024-11-28 10:05:32.425418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.593 [2024-11-28 10:05:32.425433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.593 [2024-11-28 10:05:32.425439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:53.593 [2024-11-28 10:05:32.425446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:53.593 [2024-11-28 10:05:32.425452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:53.593 [2024-11-28 10:05:32.425491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.143 ms, result 0 00:32:53.593 true 00:32:53.593 10:05:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:53.854 { 00:32:53.854 "name": "ftl", 00:32:53.854 "properties": [ 00:32:53.854 { 00:32:53.854 "name": "superblock_version", 00:32:53.854 "value": 5, 00:32:53.854 "read-only": true 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "name": "base_device", 00:32:53.854 "bands": [ 00:32:53.854 { 00:32:53.854 "id": 0, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 1, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 2, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 3, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 4, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 5, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 6, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 7, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 8, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 9, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 10, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 11, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 12, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 13, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 14, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 15, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 16, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 17, 00:32:53.854 "state": "FREE", 00:32:53.854 "validity": 0.0 00:32:53.854 } 00:32:53.854 ], 00:32:53.854 "read-only": true 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "name": "cache_device", 00:32:53.854 "type": "bdev", 00:32:53.854 "chunks": [ 00:32:53.854 { 00:32:53.854 "id": 0, 00:32:53.854 "state": "INACTIVE", 00:32:53.854 "utilization": 0.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 1, 00:32:53.854 "state": "CLOSED", 00:32:53.854 "utilization": 1.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 2, 00:32:53.854 "state": "CLOSED", 00:32:53.854 "utilization": 1.0 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 3, 00:32:53.854 "state": "OPEN", 00:32:53.854 "utilization": 0.001953125 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "id": 4, 00:32:53.854 "state": "OPEN", 00:32:53.854 "utilization": 0.0 00:32:53.854 } 00:32:53.854 ], 00:32:53.854 "read-only": true 00:32:53.854 }, 00:32:53.854 { 00:32:53.854 "name": "verbose_mode", 
00:32:53.854 "value": true, 00:32:53.854 "unit": "", 00:32:53.854 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:53.854 }, 00:32:53.855 { 00:32:53.855 "name": "prep_upgrade_on_shutdown", 00:32:53.855 "value": true, 00:32:53.855 "unit": "", 00:32:53.855 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:53.855 } 00:32:53.855 ] 00:32:53.855 } 00:32:53.855 10:05:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:32:53.855 10:05:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84638 ]] 00:32:53.855 10:05:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84638 00:32:53.855 10:05:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84638 ']' 00:32:53.855 10:05:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84638 00:32:53.855 10:05:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:53.855 10:05:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:53.855 10:05:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84638 00:32:53.855 killing process with pid 84638 00:32:53.855 10:05:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:53.855 10:05:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:53.855 10:05:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84638' 00:32:53.855 10:05:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84638 00:32:53.855 10:05:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84638 00:32:54.427 [2024-11-28 10:05:33.174863] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:54.427 [2024-11-28 10:05:33.186557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.427 [2024-11-28 10:05:33.186593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:54.427 [2024-11-28 10:05:33.186606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:54.427 [2024-11-28 10:05:33.186612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.427 [2024-11-28 10:05:33.186632] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:54.427 [2024-11-28 10:05:33.188862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.427 [2024-11-28 10:05:33.188887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:54.427 [2024-11-28 10:05:33.188896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.219 ms 00:32:54.427 [2024-11-28 10:05:33.188903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.434 [2024-11-28 10:05:41.484169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.434 [2024-11-28 10:05:41.484236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:04.434 [2024-11-28 10:05:41.484250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8295.211 ms 00:33:04.434 [2024-11-28 10:05:41.484262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.434 [2024-11-28 10:05:41.485081] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:33:04.434 [2024-11-28 10:05:41.485099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:04.434 [2024-11-28 10:05:41.485107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.807 ms 00:33:04.434 [2024-11-28 10:05:41.485114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.434 [2024-11-28 10:05:41.485983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.434 [2024-11-28 10:05:41.486000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:04.434 [2024-11-28 10:05:41.486009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.850 ms 00:33:04.434 [2024-11-28 10:05:41.486016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.434 [2024-11-28 10:05:41.494453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.434 [2024-11-28 10:05:41.494480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:04.434 [2024-11-28 10:05:41.494489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.396 ms 00:33:04.434 [2024-11-28 10:05:41.494495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.434 [2024-11-28 10:05:41.500245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.434 [2024-11-28 10:05:41.500273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:04.434 [2024-11-28 10:05:41.500282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.724 ms 00:33:04.434 [2024-11-28 10:05:41.500290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.434 [2024-11-28 10:05:41.500353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.434 [2024-11-28 10:05:41.500361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:04.434 [2024-11-28 10:05:41.500373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:33:04.434 [2024-11-28 10:05:41.500379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.434 [2024-11-28 10:05:41.508181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.434 [2024-11-28 10:05:41.508206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:04.434 [2024-11-28 10:05:41.508213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.789 ms 00:33:04.435 [2024-11-28 10:05:41.508218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.435 [2024-11-28 10:05:41.515337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.435 [2024-11-28 10:05:41.515362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:04.435 [2024-11-28 10:05:41.515369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.094 ms 00:33:04.435 [2024-11-28 10:05:41.515375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.435 [2024-11-28 10:05:41.522584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.435 [2024-11-28 10:05:41.522608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:04.435 [2024-11-28 10:05:41.522615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.186 ms 00:33:04.435 [2024-11-28 10:05:41.522621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.435 [2024-11-28 10:05:41.529717] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.435 [2024-11-28 10:05:41.529742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:04.435 [2024-11-28 10:05:41.529749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.048 ms 00:33:04.435 [2024-11-28 10:05:41.529755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.435 [2024-11-28 10:05:41.529777] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:04.435 [2024-11-28 10:05:41.529796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:04.435 [2024-11-28 10:05:41.529805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:04.435 [2024-11-28 10:05:41.529811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:04.435 [2024-11-28 10:05:41.529818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:04.435 [2024-11-28 10:05:41.529824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:04.435 [2024-11-28 10:05:41.529830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:04.435 [2024-11-28 10:05:41.529836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:04.435 [2024-11-28 10:05:41.529842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:04.435 [2024-11-28 10:05:41.529847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:04.435 [2024-11-28 10:05:41.529853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:04.435 [2024-11-28 10:05:41.529859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:04.435 [2024-11-28 10:05:41.529865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:04.435 [2024-11-28 10:05:41.529870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:04.435 [2024-11-28 10:05:41.529876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:04.435 [2024-11-28 10:05:41.529881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:04.435 [2024-11-28 10:05:41.529887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:04.435 [2024-11-28 10:05:41.529893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:04.435 [2024-11-28 10:05:41.529899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:04.435 [2024-11-28 10:05:41.529910] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:04.435 [2024-11-28 10:05:41.529916] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 8ad00f45-ab03-4a63-8071-370546f137fd 00:33:04.435 [2024-11-28 10:05:41.529922] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:04.435 [2024-11-28 10:05:41.529928] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:33:04.435 [2024-11-28 10:05:41.529933] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:33:04.435 [2024-11-28 10:05:41.529941] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:33:04.435 [2024-11-28 10:05:41.529947] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:04.435 [2024-11-28 10:05:41.529956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:04.435 [2024-11-28 10:05:41.529963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:04.435 [2024-11-28 10:05:41.529969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:04.435 [2024-11-28 10:05:41.529975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:04.435 [2024-11-28 10:05:41.529981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.435 [2024-11-28 10:05:41.529990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:04.435 [2024-11-28 10:05:41.529996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.204 ms 00:33:04.435 [2024-11-28 10:05:41.530002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.435 [2024-11-28 10:05:41.540077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.435 [2024-11-28 10:05:41.540102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:04.435 [2024-11-28 10:05:41.540111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.055 ms 00:33:04.435 [2024-11-28 10:05:41.540120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.435 [2024-11-28 10:05:41.540410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.435 [2024-11-28 10:05:41.540418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:04.435 [2024-11-28 10:05:41.540425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.276 ms 00:33:04.435 [2024-11-28 10:05:41.540432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.435 [2024-11-28 10:05:41.574485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.435 [2024-11-28 10:05:41.574516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:04.435 [2024-11-28 10:05:41.574529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.435 [2024-11-28 10:05:41.574536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.435 [2024-11-28 10:05:41.574562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.435 [2024-11-28 10:05:41.574569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:04.435 [2024-11-28 10:05:41.574576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.435 [2024-11-28 10:05:41.574582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.435 [2024-11-28 10:05:41.574636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.435 [2024-11-28 10:05:41.574644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:04.435 [2024-11-28 10:05:41.574654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.435 [2024-11-28 10:05:41.574664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.435 [2024-11-28 10:05:41.574677] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.435 [2024-11-28 10:05:41.574684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:04.435 [2024-11-28 10:05:41.574690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.435 [2024-11-28 10:05:41.574696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.435 [2024-11-28 10:05:41.636774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.435 [2024-11-28 10:05:41.636809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:04.435 [2024-11-28 10:05:41.636819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.435 [2024-11-28 10:05:41.636830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.435 [2024-11-28 10:05:41.687532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.435 [2024-11-28 10:05:41.687566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:04.435 [2024-11-28 10:05:41.687576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.436 [2024-11-28 10:05:41.687582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.436 [2024-11-28 10:05:41.687659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.436 [2024-11-28 10:05:41.687667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:04.436 [2024-11-28 10:05:41.687674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.436 [2024-11-28 10:05:41.687680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.436 [2024-11-28 10:05:41.687717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.436 [2024-11-28 10:05:41.687726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:04.436 [2024-11-28 10:05:41.687733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.436 [2024-11-28 10:05:41.687739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.436 [2024-11-28 10:05:41.687812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.436 [2024-11-28 10:05:41.687820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:04.436 [2024-11-28 10:05:41.687828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.436 [2024-11-28 10:05:41.687834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.436 [2024-11-28 10:05:41.687859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.436 [2024-11-28 10:05:41.687869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:04.436 [2024-11-28 10:05:41.687875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.436 [2024-11-28 10:05:41.687881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.436 [2024-11-28 10:05:41.687917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.436 [2024-11-28 10:05:41.687924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:04.436 [2024-11-28 10:05:41.687930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.436 [2024-11-28 10:05:41.687936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.436 
[2024-11-28 10:05:41.687977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.436 [2024-11-28 10:05:41.687986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:04.436 [2024-11-28 10:05:41.687992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.436 [2024-11-28 10:05:41.687998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.436 [2024-11-28 10:05:41.688103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8501.496 ms, result 0 00:33:05.822 10:05:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:05.822 10:05:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:33:05.822 10:05:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:05.822 10:05:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:05.822 10:05:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:05.822 10:05:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85153 00:33:05.822 10:05:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:05.822 10:05:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85153 00:33:05.822 10:05:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:05.822 10:05:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85153 ']' 00:33:05.822 10:05:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.822 10:05:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:05.822 10:05:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.822 10:05:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:05.822 10:05:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:05.822 [2024-11-28 10:05:44.607123] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
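With the old target gone, the test relaunches spdk_tgt pinned to a single core with the JSON configuration saved earlier and waits in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A rough standalone sketch of that restart-and-wait step; the 30-second loop and the spdk_get_version probe are illustrative assumptions, not what the waitforlisten helper literally does:

    # Relaunch the target from the saved config and wait for its RPC socket.
    ./build/bin/spdk_tgt --cpumask='[0]' --config=test/ftl/config/tgt.json &
    tgt_pid=$!
    for _ in $(seq 1 30); do      # assumed 30 s budget
        if ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
            break                 # target is up and listening
        fi
        sleep 1
    done
    echo "spdk_tgt running as pid ${tgt_pid}"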
00:33:05.822 [2024-11-28 10:05:44.607413] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85153 ] 00:33:06.083 [2024-11-28 10:05:44.757376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.083 [2024-11-28 10:05:44.850718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.655 [2024-11-28 10:05:45.483941] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:06.655 [2024-11-28 10:05:45.484001] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:06.916 [2024-11-28 10:05:45.632747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.916 [2024-11-28 10:05:45.632783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:06.916 [2024-11-28 10:05:45.632794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:06.917 [2024-11-28 10:05:45.632801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.917 [2024-11-28 10:05:45.632851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.917 [2024-11-28 10:05:45.632861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:06.917 [2024-11-28 10:05:45.632868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:33:06.917 [2024-11-28 10:05:45.632873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.917 [2024-11-28 10:05:45.632889] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:06.917 [2024-11-28 10:05:45.633441] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:06.917 [2024-11-28 10:05:45.633460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.917 [2024-11-28 10:05:45.633467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:06.917 [2024-11-28 10:05:45.633473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.575 ms 00:33:06.917 [2024-11-28 10:05:45.633480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.917 [2024-11-28 10:05:45.634774] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:06.917 [2024-11-28 10:05:45.645368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.917 [2024-11-28 10:05:45.645400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:06.917 [2024-11-28 10:05:45.645409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.595 ms 00:33:06.917 [2024-11-28 10:05:45.645415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.917 [2024-11-28 10:05:45.645462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.917 [2024-11-28 10:05:45.645470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:06.917 [2024-11-28 10:05:45.645477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:33:06.917 [2024-11-28 10:05:45.645483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.917 [2024-11-28 10:05:45.651587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.917 [2024-11-28 
10:05:45.651611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:06.917 [2024-11-28 10:05:45.651618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.060 ms 00:33:06.917 [2024-11-28 10:05:45.651624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.917 [2024-11-28 10:05:45.651671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.917 [2024-11-28 10:05:45.651679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:06.917 [2024-11-28 10:05:45.651686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:33:06.917 [2024-11-28 10:05:45.651692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.917 [2024-11-28 10:05:45.651738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.917 [2024-11-28 10:05:45.651746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:06.917 [2024-11-28 10:05:45.651753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:06.917 [2024-11-28 10:05:45.651760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.917 [2024-11-28 10:05:45.651777] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:06.917 [2024-11-28 10:05:45.654693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.917 [2024-11-28 10:05:45.654718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:06.917 [2024-11-28 10:05:45.654725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.920 ms 00:33:06.917 [2024-11-28 10:05:45.654731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.917 [2024-11-28 10:05:45.654754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.917 [2024-11-28 10:05:45.654760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:06.917 [2024-11-28 10:05:45.654766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:06.917 [2024-11-28 10:05:45.654772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.917 [2024-11-28 10:05:45.654790] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:06.917 [2024-11-28 10:05:45.654807] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:06.917 [2024-11-28 10:05:45.654836] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:06.917 [2024-11-28 10:05:45.654848] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:06.917 [2024-11-28 10:05:45.654933] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:06.917 [2024-11-28 10:05:45.654942] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:06.917 [2024-11-28 10:05:45.654951] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:06.917 [2024-11-28 10:05:45.654961] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:06.917 [2024-11-28 10:05:45.654967] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:33:06.917 [2024-11-28 10:05:45.654973] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:06.917 [2024-11-28 10:05:45.654979] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:06.917 [2024-11-28 10:05:45.654985] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:06.917 [2024-11-28 10:05:45.654990] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:06.917 [2024-11-28 10:05:45.654997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.917 [2024-11-28 10:05:45.655003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:06.917 [2024-11-28 10:05:45.655009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.208 ms 00:33:06.917 [2024-11-28 10:05:45.655014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.917 [2024-11-28 10:05:45.655078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.917 [2024-11-28 10:05:45.655087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:06.917 [2024-11-28 10:05:45.655094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:33:06.917 [2024-11-28 10:05:45.655099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.917 [2024-11-28 10:05:45.655188] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:06.917 [2024-11-28 10:05:45.655196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:06.917 [2024-11-28 10:05:45.655203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:06.917 [2024-11-28 10:05:45.655209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:06.917 [2024-11-28 10:05:45.655215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:06.917 [2024-11-28 10:05:45.655221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:06.917 [2024-11-28 10:05:45.655226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:06.917 [2024-11-28 10:05:45.655231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:06.917 [2024-11-28 10:05:45.655241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:06.917 [2024-11-28 10:05:45.655246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:06.917 [2024-11-28 10:05:45.655251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:06.917 [2024-11-28 10:05:45.655256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:06.917 [2024-11-28 10:05:45.655261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:06.917 [2024-11-28 10:05:45.655266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:06.917 [2024-11-28 10:05:45.655271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:06.917 [2024-11-28 10:05:45.655276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:06.917 [2024-11-28 10:05:45.655281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:06.917 [2024-11-28 10:05:45.655286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:06.917 [2024-11-28 10:05:45.655291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:06.917 [2024-11-28 10:05:45.655296] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:06.917 [2024-11-28 10:05:45.655301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:06.917 [2024-11-28 10:05:45.655306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:06.917 [2024-11-28 10:05:45.655311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:06.917 [2024-11-28 10:05:45.655322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:06.917 [2024-11-28 10:05:45.655326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:06.917 [2024-11-28 10:05:45.655331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:06.917 [2024-11-28 10:05:45.655337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:06.917 [2024-11-28 10:05:45.655341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:06.917 [2024-11-28 10:05:45.655346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:06.917 [2024-11-28 10:05:45.655351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:06.917 [2024-11-28 10:05:45.655356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:06.917 [2024-11-28 10:05:45.655361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:06.917 [2024-11-28 10:05:45.655366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:06.917 [2024-11-28 10:05:45.655371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:06.917 [2024-11-28 10:05:45.655375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:06.917 [2024-11-28 10:05:45.655381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:06.917 [2024-11-28 10:05:45.655387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:06.917 [2024-11-28 10:05:45.655392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:06.917 [2024-11-28 10:05:45.655397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:06.917 [2024-11-28 10:05:45.655402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:06.917 [2024-11-28 10:05:45.655409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:06.917 [2024-11-28 10:05:45.655415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:06.918 [2024-11-28 10:05:45.655420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:06.918 [2024-11-28 10:05:45.655424] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:06.918 [2024-11-28 10:05:45.655430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:06.918 [2024-11-28 10:05:45.655438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:06.918 [2024-11-28 10:05:45.655444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:06.918 [2024-11-28 10:05:45.655449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:06.918 [2024-11-28 10:05:45.655455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:06.918 [2024-11-28 10:05:45.655460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:06.918 [2024-11-28 10:05:45.655465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:06.918 [2024-11-28 10:05:45.655469] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:06.918 [2024-11-28 10:05:45.655474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:06.918 [2024-11-28 10:05:45.655481] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:06.918 [2024-11-28 10:05:45.655488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:06.918 [2024-11-28 10:05:45.655494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:06.918 [2024-11-28 10:05:45.655500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:06.918 [2024-11-28 10:05:45.655505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:06.918 [2024-11-28 10:05:45.655510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:06.918 [2024-11-28 10:05:45.655516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:06.918 [2024-11-28 10:05:45.655521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:06.918 [2024-11-28 10:05:45.655526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:06.918 [2024-11-28 10:05:45.655531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:06.918 [2024-11-28 10:05:45.655536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:06.918 [2024-11-28 10:05:45.655541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:06.918 [2024-11-28 10:05:45.655546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:06.918 [2024-11-28 10:05:45.655551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:06.918 [2024-11-28 10:05:45.655557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:06.918 [2024-11-28 10:05:45.655562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:06.918 [2024-11-28 10:05:45.655567] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:06.918 [2024-11-28 10:05:45.655573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:06.918 [2024-11-28 10:05:45.655579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:06.918 [2024-11-28 10:05:45.655587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:06.918 [2024-11-28 10:05:45.655593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:06.918 [2024-11-28 10:05:45.655598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:06.918 [2024-11-28 10:05:45.655604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:06.918 [2024-11-28 10:05:45.655609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:06.918 [2024-11-28 10:05:45.655615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.479 ms 00:33:06.918 [2024-11-28 10:05:45.655620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:06.918 [2024-11-28 10:05:45.655664] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:33:06.918 [2024-11-28 10:05:45.655673] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:11.128 [2024-11-28 10:05:49.299763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.128 [2024-11-28 10:05:49.299827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:11.128 [2024-11-28 10:05:49.299845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3644.085 ms 00:33:11.128 [2024-11-28 10:05:49.299854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.128 [2024-11-28 10:05:49.336619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.128 [2024-11-28 10:05:49.336677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:11.128 [2024-11-28 10:05:49.336692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.501 ms 00:33:11.128 [2024-11-28 10:05:49.336708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.128 [2024-11-28 10:05:49.336808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.128 [2024-11-28 10:05:49.336819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:11.128 [2024-11-28 10:05:49.336830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:33:11.128 [2024-11-28 10:05:49.336838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.128 [2024-11-28 10:05:49.376494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.128 [2024-11-28 10:05:49.376552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:11.128 [2024-11-28 10:05:49.376566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.615 ms 00:33:11.128 [2024-11-28 10:05:49.376575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.128 [2024-11-28 10:05:49.376611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.128 [2024-11-28 10:05:49.376620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:11.128 [2024-11-28 10:05:49.376630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:11.128 [2024-11-28 10:05:49.376639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.128 [2024-11-28 10:05:49.377411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.128 [2024-11-28 10:05:49.377447] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:11.128 [2024-11-28 10:05:49.377465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.687 ms 00:33:11.128 [2024-11-28 10:05:49.377474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.128 [2024-11-28 10:05:49.377531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.377543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:11.129 [2024-11-28 10:05:49.377552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:33:11.129 [2024-11-28 10:05:49.377560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.398237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.398284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:11.129 [2024-11-28 10:05:49.398297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.654 ms 00:33:11.129 [2024-11-28 10:05:49.398307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.434838] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:11.129 [2024-11-28 10:05:49.434896] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:11.129 [2024-11-28 10:05:49.434914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.434925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:33:11.129 [2024-11-28 10:05:49.434936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.475 ms 00:33:11.129 [2024-11-28 10:05:49.434944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.450053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.450110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:33:11.129 [2024-11-28 10:05:49.450124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.047 ms 00:33:11.129 [2024-11-28 10:05:49.450136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.462607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.462670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:33:11.129 [2024-11-28 10:05:49.462683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.384 ms 00:33:11.129 [2024-11-28 10:05:49.462691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.475009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.475054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:33:11.129 [2024-11-28 10:05:49.475065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.265 ms 00:33:11.129 [2024-11-28 10:05:49.475074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.475762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.475792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:11.129 [2024-11-28 
10:05:49.475804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.548 ms 00:33:11.129 [2024-11-28 10:05:49.475813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.548336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.548389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:11.129 [2024-11-28 10:05:49.548402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 72.499 ms 00:33:11.129 [2024-11-28 10:05:49.548411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.561372] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:11.129 [2024-11-28 10:05:49.562675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.562717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:11.129 [2024-11-28 10:05:49.562730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.206 ms 00:33:11.129 [2024-11-28 10:05:49.562739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.562833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.562847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:33:11.129 [2024-11-28 10:05:49.562857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:33:11.129 [2024-11-28 10:05:49.562867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.562930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.562944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:11.129 [2024-11-28 10:05:49.562954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:33:11.129 [2024-11-28 10:05:49.562962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.562988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.563000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:11.129 [2024-11-28 10:05:49.563010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:11.129 [2024-11-28 10:05:49.563019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.563063] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:11.129 [2024-11-28 10:05:49.563077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.563086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:11.129 [2024-11-28 10:05:49.563095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:33:11.129 [2024-11-28 10:05:49.563104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.589639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.589689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:11.129 [2024-11-28 10:05:49.589702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.513 ms 00:33:11.129 [2024-11-28 10:05:49.589712] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.589813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.589826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:11.129 [2024-11-28 10:05:49.589836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:33:11.129 [2024-11-28 10:05:49.589844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.591549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3958.173 ms, result 0 00:33:11.129 [2024-11-28 10:05:49.606079] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.129 [2024-11-28 10:05:49.622100] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:11.129 [2024-11-28 10:05:49.630324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:11.129 10:05:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:11.129 10:05:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:11.129 10:05:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:11.129 10:05:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:11.129 10:05:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:11.129 [2024-11-28 10:05:49.874317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.874372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:11.129 [2024-11-28 10:05:49.874386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:11.129 [2024-11-28 10:05:49.874395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.874421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.874431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:11.129 [2024-11-28 10:05:49.874441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:11.129 [2024-11-28 10:05:49.874449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.874471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.129 [2024-11-28 10:05:49.874479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:11.129 [2024-11-28 10:05:49.874488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:11.129 [2024-11-28 10:05:49.874499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.129 [2024-11-28 10:05:49.874566] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.238 ms, result 0 00:33:11.129 true 00:33:11.129 10:05:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:11.392 { 00:33:11.392 "name": "ftl", 00:33:11.392 "properties": [ 00:33:11.392 { 00:33:11.392 "name": "superblock_version", 00:33:11.392 "value": 5, 00:33:11.392 "read-only": true 00:33:11.392 }, 
00:33:11.392 { 00:33:11.392 "name": "base_device", 00:33:11.392 "bands": [ 00:33:11.392 { 00:33:11.392 "id": 0, 00:33:11.392 "state": "CLOSED", 00:33:11.392 "validity": 1.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 1, 00:33:11.392 "state": "CLOSED", 00:33:11.392 "validity": 1.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 2, 00:33:11.392 "state": "CLOSED", 00:33:11.392 "validity": 0.007843137254901933 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 3, 00:33:11.392 "state": "FREE", 00:33:11.392 "validity": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 4, 00:33:11.392 "state": "FREE", 00:33:11.392 "validity": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 5, 00:33:11.392 "state": "FREE", 00:33:11.392 "validity": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 6, 00:33:11.392 "state": "FREE", 00:33:11.392 "validity": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 7, 00:33:11.392 "state": "FREE", 00:33:11.392 "validity": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 8, 00:33:11.392 "state": "FREE", 00:33:11.392 "validity": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 9, 00:33:11.392 "state": "FREE", 00:33:11.392 "validity": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 10, 00:33:11.392 "state": "FREE", 00:33:11.392 "validity": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 11, 00:33:11.392 "state": "FREE", 00:33:11.392 "validity": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 12, 00:33:11.392 "state": "FREE", 00:33:11.392 "validity": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 13, 00:33:11.392 "state": "FREE", 00:33:11.392 "validity": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 14, 00:33:11.392 "state": "FREE", 00:33:11.392 "validity": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 15, 00:33:11.392 "state": "FREE", 00:33:11.392 "validity": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 16, 00:33:11.392 "state": "FREE", 00:33:11.392 "validity": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 17, 00:33:11.392 "state": "FREE", 00:33:11.392 "validity": 0.0 00:33:11.392 } 00:33:11.392 ], 00:33:11.392 "read-only": true 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "name": "cache_device", 00:33:11.392 "type": "bdev", 00:33:11.392 "chunks": [ 00:33:11.392 { 00:33:11.392 "id": 0, 00:33:11.392 "state": "INACTIVE", 00:33:11.392 "utilization": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 1, 00:33:11.392 "state": "OPEN", 00:33:11.392 "utilization": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 2, 00:33:11.392 "state": "OPEN", 00:33:11.392 "utilization": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 3, 00:33:11.392 "state": "FREE", 00:33:11.392 "utilization": 0.0 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "id": 4, 00:33:11.392 "state": "FREE", 00:33:11.392 "utilization": 0.0 00:33:11.392 } 00:33:11.392 ], 00:33:11.392 "read-only": true 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "name": "verbose_mode", 00:33:11.392 "value": true, 00:33:11.392 "unit": "", 00:33:11.392 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:11.392 }, 00:33:11.392 { 00:33:11.392 "name": "prep_upgrade_on_shutdown", 00:33:11.392 "value": false, 00:33:11.392 "unit": "", 00:33:11.392 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:11.392 } 00:33:11.392 ] 00:33:11.392 } 00:33:11.392 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:33:11.392 10:05:50 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:11.392 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:11.654 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:33:11.654 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:33:11.654 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:33:11.654 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:33:11.654 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:11.916 Validate MD5 checksum, iteration 1 00:33:11.916 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:33:11.916 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:33:11.916 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:33:11.916 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:11.916 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:11.916 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:11.916 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:11.916 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:11.916 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:11.916 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:11.916 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:11.916 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:11.916 10:05:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:11.916 [2024-11-28 10:05:50.611420] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:33:11.916 [2024-11-28 10:05:50.611539] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85235 ] 00:33:11.916 [2024-11-28 10:05:50.772539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.177 [2024-11-28 10:05:50.894051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.566  [2024-11-28T10:05:53.393Z] Copying: 579/1024 [MB] (579 MBps) [2024-11-28T10:05:54.830Z] Copying: 1024/1024 [MB] (average 536 MBps) 00:33:15.950 00:33:15.950 10:05:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:15.950 10:05:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:17.861 10:05:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:17.861 Validate MD5 checksum, iteration 2 00:33:17.861 10:05:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=68de6b9a4e308b77a92cb99a3849fc0e 00:33:17.861 10:05:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 68de6b9a4e308b77a92cb99a3849fc0e != \6\8\d\e\6\b\9\a\4\e\3\0\8\b\7\7\a\9\2\c\b\9\9\a\3\8\4\9\f\c\0\e ]] 00:33:17.861 10:05:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:17.861 10:05:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:17.861 10:05:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:17.861 10:05:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:17.861 10:05:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:17.861 10:05:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:17.861 10:05:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:17.861 10:05:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:17.861 10:05:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:17.861 [2024-11-28 10:05:56.727861] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:33:17.861 [2024-11-28 10:05:56.727967] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85302 ] 00:33:18.122 [2024-11-28 10:05:56.888566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.122 [2024-11-28 10:05:56.983130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.039  [2024-11-28T10:05:59.488Z] Copying: 527/1024 [MB] (527 MBps) [2024-11-28T10:06:04.775Z] Copying: 1024/1024 [MB] (average 553 MBps) 00:33:25.895 00:33:25.895 10:06:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:25.895 10:06:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=de870a5a7096685d53f224c342da02a7 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ de870a5a7096685d53f224c342da02a7 != \d\e\8\7\0\a\5\a\7\0\9\6\6\8\5\d\5\3\f\2\2\4\c\3\4\2\d\a\0\2\a\7 ]] 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85153 ]] 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85153 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85398 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85398 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85398 ']' 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:27.277 10:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:27.277 [2024-11-28 10:06:06.011590] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:33:27.277 [2024-11-28 10:06:06.011710] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85398 ] 00:33:27.277 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 85153 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:33:27.537 [2024-11-28 10:06:06.168452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.537 [2024-11-28 10:06:06.259685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.108 [2024-11-28 10:06:06.890698] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:28.108 [2024-11-28 10:06:06.890754] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:28.370 [2024-11-28 10:06:07.039401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.370 [2024-11-28 10:06:07.039432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:28.370 [2024-11-28 10:06:07.039444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:28.370 [2024-11-28 10:06:07.039451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.370 [2024-11-28 10:06:07.039499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.370 [2024-11-28 10:06:07.039508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:28.370 [2024-11-28 10:06:07.039514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:33:28.370 [2024-11-28 10:06:07.039520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.370 [2024-11-28 10:06:07.039535] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:28.370 [2024-11-28 10:06:07.040093] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:28.370 [2024-11-28 10:06:07.040112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.370 [2024-11-28 10:06:07.040119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:28.370 [2024-11-28 10:06:07.040125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.581 ms 00:33:28.370 [2024-11-28 10:06:07.040132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.370 [2024-11-28 10:06:07.040366] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:28.370 [2024-11-28 10:06:07.054283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.370 [2024-11-28 10:06:07.054308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:28.370 [2024-11-28 10:06:07.054319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.917 ms 
00:33:28.370 [2024-11-28 10:06:07.054326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.370 [2024-11-28 10:06:07.061147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.370 [2024-11-28 10:06:07.061178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:28.370 [2024-11-28 10:06:07.061185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:33:28.370 [2024-11-28 10:06:07.061191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.370 [2024-11-28 10:06:07.061441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.370 [2024-11-28 10:06:07.061450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:28.370 [2024-11-28 10:06:07.061457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:33:28.370 [2024-11-28 10:06:07.061463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.370 [2024-11-28 10:06:07.061505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.370 [2024-11-28 10:06:07.061513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:28.370 [2024-11-28 10:06:07.061519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:33:28.370 [2024-11-28 10:06:07.061525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.370 [2024-11-28 10:06:07.061545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.370 [2024-11-28 10:06:07.061552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:28.370 [2024-11-28 10:06:07.061558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:28.370 [2024-11-28 10:06:07.061563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.370 [2024-11-28 10:06:07.061580] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:28.370 [2024-11-28 10:06:07.063832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.370 [2024-11-28 10:06:07.063852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:28.370 [2024-11-28 10:06:07.063860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.256 ms 00:33:28.370 [2024-11-28 10:06:07.063868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.370 [2024-11-28 10:06:07.063887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.370 [2024-11-28 10:06:07.063894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:28.370 [2024-11-28 10:06:07.063900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:28.370 [2024-11-28 10:06:07.063906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.370 [2024-11-28 10:06:07.063923] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:28.370 [2024-11-28 10:06:07.063940] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:28.370 [2024-11-28 10:06:07.063967] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:28.370 [2024-11-28 10:06:07.063981] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:28.370 [2024-11-28 
10:06:07.064065] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:28.370 [2024-11-28 10:06:07.064073] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:28.370 [2024-11-28 10:06:07.064081] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:28.370 [2024-11-28 10:06:07.064089] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:28.370 [2024-11-28 10:06:07.064096] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:28.370 [2024-11-28 10:06:07.064102] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:28.370 [2024-11-28 10:06:07.064108] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:28.371 [2024-11-28 10:06:07.064114] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:28.371 [2024-11-28 10:06:07.064119] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:28.371 [2024-11-28 10:06:07.064127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.371 [2024-11-28 10:06:07.064133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:28.371 [2024-11-28 10:06:07.064139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:33:28.371 [2024-11-28 10:06:07.064144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.371 [2024-11-28 10:06:07.064218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.371 [2024-11-28 10:06:07.064225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:28.371 [2024-11-28 10:06:07.064231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:33:28.371 [2024-11-28 10:06:07.064237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.371 [2024-11-28 10:06:07.064313] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:28.371 [2024-11-28 10:06:07.064324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:28.371 [2024-11-28 10:06:07.064331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:28.371 [2024-11-28 10:06:07.064337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.371 [2024-11-28 10:06:07.064349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:28.371 [2024-11-28 10:06:07.064354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:28.371 [2024-11-28 10:06:07.064360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:28.371 [2024-11-28 10:06:07.064365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:28.371 [2024-11-28 10:06:07.064370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:28.371 [2024-11-28 10:06:07.064376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.371 [2024-11-28 10:06:07.064381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:28.371 [2024-11-28 10:06:07.064385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:28.371 [2024-11-28 10:06:07.064390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.371 [2024-11-28 
10:06:07.064395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:28.371 [2024-11-28 10:06:07.064401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:28.371 [2024-11-28 10:06:07.064406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.371 [2024-11-28 10:06:07.064411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:28.371 [2024-11-28 10:06:07.064416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:28.371 [2024-11-28 10:06:07.064420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.371 [2024-11-28 10:06:07.064426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:28.371 [2024-11-28 10:06:07.064431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:28.371 [2024-11-28 10:06:07.064441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:28.371 [2024-11-28 10:06:07.064446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:28.371 [2024-11-28 10:06:07.064451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:28.371 [2024-11-28 10:06:07.064456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:28.371 [2024-11-28 10:06:07.064462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:28.371 [2024-11-28 10:06:07.064466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:28.371 [2024-11-28 10:06:07.064471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:28.371 [2024-11-28 10:06:07.064477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:28.371 [2024-11-28 10:06:07.064481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:28.371 [2024-11-28 10:06:07.064487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:28.371 [2024-11-28 10:06:07.064492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:28.371 [2024-11-28 10:06:07.064497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:28.371 [2024-11-28 10:06:07.064502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.371 [2024-11-28 10:06:07.064507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:28.371 [2024-11-28 10:06:07.064512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:28.371 [2024-11-28 10:06:07.064523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.371 [2024-11-28 10:06:07.064528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:28.371 [2024-11-28 10:06:07.064535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:28.371 [2024-11-28 10:06:07.064539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.371 [2024-11-28 10:06:07.064544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:28.371 [2024-11-28 10:06:07.064549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:28.371 [2024-11-28 10:06:07.064554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.371 [2024-11-28 10:06:07.064559] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:28.371 [2024-11-28 10:06:07.064565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:28.371 
[2024-11-28 10:06:07.064570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:28.371 [2024-11-28 10:06:07.064575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:28.371 [2024-11-28 10:06:07.064581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:28.371 [2024-11-28 10:06:07.064587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:28.371 [2024-11-28 10:06:07.064592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:28.371 [2024-11-28 10:06:07.064597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:28.371 [2024-11-28 10:06:07.064602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:28.371 [2024-11-28 10:06:07.064607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:28.371 [2024-11-28 10:06:07.064613] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:28.371 [2024-11-28 10:06:07.064620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:28.371 [2024-11-28 10:06:07.064626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:28.371 [2024-11-28 10:06:07.064631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:28.371 [2024-11-28 10:06:07.064636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:28.371 [2024-11-28 10:06:07.064641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:28.371 [2024-11-28 10:06:07.064648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:28.371 [2024-11-28 10:06:07.064653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:28.371 [2024-11-28 10:06:07.064658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:28.371 [2024-11-28 10:06:07.064663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:28.371 [2024-11-28 10:06:07.064668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:28.371 [2024-11-28 10:06:07.064675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:28.371 [2024-11-28 10:06:07.064681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:28.371 [2024-11-28 10:06:07.064686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:28.371 [2024-11-28 10:06:07.064691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:28.371 [2024-11-28 10:06:07.064699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:28.371 [2024-11-28 10:06:07.064705] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:28.371 [2024-11-28 10:06:07.064712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:28.371 [2024-11-28 10:06:07.064720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:28.371 [2024-11-28 10:06:07.064726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:28.371 [2024-11-28 10:06:07.064731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:28.371 [2024-11-28 10:06:07.064736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:28.371 [2024-11-28 10:06:07.064742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.371 [2024-11-28 10:06:07.064747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:28.371 [2024-11-28 10:06:07.064753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.481 ms 00:33:28.371 [2024-11-28 10:06:07.064758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.371 [2024-11-28 10:06:07.086201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.371 [2024-11-28 10:06:07.086242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:28.371 [2024-11-28 10:06:07.086250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.404 ms 00:33:28.371 [2024-11-28 10:06:07.086258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.371 [2024-11-28 10:06:07.086289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.371 [2024-11-28 10:06:07.086296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:28.371 [2024-11-28 10:06:07.086303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:28.371 [2024-11-28 10:06:07.086309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.371 [2024-11-28 10:06:07.112518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.371 [2024-11-28 10:06:07.112547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:28.371 [2024-11-28 10:06:07.112556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.168 ms 00:33:28.372 [2024-11-28 10:06:07.112562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.372 [2024-11-28 10:06:07.112584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.372 [2024-11-28 10:06:07.112590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:28.372 [2024-11-28 10:06:07.112597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:28.372 [2024-11-28 10:06:07.112605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.372 [2024-11-28 10:06:07.112677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.372 [2024-11-28 10:06:07.112685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:33:28.372 [2024-11-28 10:06:07.112693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:33:28.372 [2024-11-28 10:06:07.112700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.372 [2024-11-28 10:06:07.112733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.372 [2024-11-28 10:06:07.112741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:28.372 [2024-11-28 10:06:07.112748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:33:28.372 [2024-11-28 10:06:07.112754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.372 [2024-11-28 10:06:07.126064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.372 [2024-11-28 10:06:07.126091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:28.372 [2024-11-28 10:06:07.126100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.287 ms 00:33:28.372 [2024-11-28 10:06:07.126108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.372 [2024-11-28 10:06:07.126199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.372 [2024-11-28 10:06:07.126216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:33:28.372 [2024-11-28 10:06:07.126224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:28.372 [2024-11-28 10:06:07.126230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.372 [2024-11-28 10:06:07.158645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.372 [2024-11-28 10:06:07.158826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:33:28.372 [2024-11-28 10:06:07.158841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.399 ms 00:33:28.372 [2024-11-28 10:06:07.158848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.372 [2024-11-28 10:06:07.166000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.372 [2024-11-28 10:06:07.166101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:28.372 [2024-11-28 10:06:07.166114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.402 ms 00:33:28.372 [2024-11-28 10:06:07.166120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.372 [2024-11-28 10:06:07.213539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.372 [2024-11-28 10:06:07.213578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:28.372 [2024-11-28 10:06:07.213589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.359 ms 00:33:28.372 [2024-11-28 10:06:07.213596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.372 [2024-11-28 10:06:07.213721] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:33:28.372 [2024-11-28 10:06:07.213823] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:33:28.372 [2024-11-28 10:06:07.213923] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:33:28.372 [2024-11-28 10:06:07.214023] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:33:28.372 [2024-11-28 10:06:07.214030] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.372 [2024-11-28 10:06:07.214036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:33:28.372 [2024-11-28 10:06:07.214043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.401 ms 00:33:28.372 [2024-11-28 10:06:07.214050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.372 [2024-11-28 10:06:07.214095] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:33:28.372 [2024-11-28 10:06:07.214106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.372 [2024-11-28 10:06:07.214115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:33:28.372 [2024-11-28 10:06:07.214122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:28.372 [2024-11-28 10:06:07.214128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.372 [2024-11-28 10:06:07.226879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.372 [2024-11-28 10:06:07.226909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:33:28.372 [2024-11-28 10:06:07.226918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.733 ms 00:33:28.372 [2024-11-28 10:06:07.226925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.372 [2024-11-28 10:06:07.233224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.372 [2024-11-28 10:06:07.233248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:33:28.372 [2024-11-28 10:06:07.233256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:28.372 [2024-11-28 10:06:07.233263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.372 [2024-11-28 10:06:07.233330] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:33:28.372 [2024-11-28 10:06:07.233488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.372 [2024-11-28 10:06:07.233498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:28.372 [2024-11-28 10:06:07.233505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.160 ms 00:33:28.372 [2024-11-28 10:06:07.233511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.944 [2024-11-28 10:06:07.775300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.944 [2024-11-28 10:06:07.775344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:28.944 [2024-11-28 10:06:07.775356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 541.128 ms 00:33:28.944 [2024-11-28 10:06:07.775363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.944 [2024-11-28 10:06:07.778633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.944 [2024-11-28 10:06:07.778661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:28.944 [2024-11-28 10:06:07.778670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.103 ms 00:33:28.944 [2024-11-28 10:06:07.778680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.944 [2024-11-28 10:06:07.779177] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:33:28.944 [2024-11-28 10:06:07.779196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.944 [2024-11-28 10:06:07.779203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:28.944 [2024-11-28 10:06:07.779210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.491 ms 00:33:28.944 [2024-11-28 10:06:07.779216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.944 [2024-11-28 10:06:07.779242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.945 [2024-11-28 10:06:07.779250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:28.945 [2024-11-28 10:06:07.779257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:28.945 [2024-11-28 10:06:07.779267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.945 [2024-11-28 10:06:07.779294] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 545.964 ms, result 0 00:33:28.945 [2024-11-28 10:06:07.779324] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:33:28.945 [2024-11-28 10:06:07.779485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.945 [2024-11-28 10:06:07.779494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:28.945 [2024-11-28 10:06:07.779500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.162 ms 00:33:28.945 [2024-11-28 10:06:07.779505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.888 [2024-11-28 10:06:08.455295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.888 [2024-11-28 10:06:08.455339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:29.888 [2024-11-28 10:06:08.455362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 675.071 ms 00:33:29.888 [2024-11-28 10:06:08.455371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.888 [2024-11-28 10:06:08.459115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.888 [2024-11-28 10:06:08.459148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:29.888 [2024-11-28 10:06:08.459169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.922 ms 00:33:29.888 [2024-11-28 10:06:08.459177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.888 [2024-11-28 10:06:08.459512] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:33:29.888 [2024-11-28 10:06:08.459539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.888 [2024-11-28 10:06:08.459548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:29.888 [2024-11-28 10:06:08.459557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms 00:33:29.888 [2024-11-28 10:06:08.459564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.888 [2024-11-28 10:06:08.459591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.888 [2024-11-28 10:06:08.459599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:29.888 [2024-11-28 10:06:08.459607] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:29.888 [2024-11-28 10:06:08.459614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.889 [2024-11-28 10:06:08.459648] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 680.315 ms, result 0 00:33:29.889 [2024-11-28 10:06:08.459695] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:29.889 [2024-11-28 10:06:08.459706] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:29.889 [2024-11-28 10:06:08.459716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.889 [2024-11-28 10:06:08.459724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:33:29.889 [2024-11-28 10:06:08.459733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1226.402 ms 00:33:29.889 [2024-11-28 10:06:08.459740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.889 [2024-11-28 10:06:08.459768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.889 [2024-11-28 10:06:08.459787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:33:29.889 [2024-11-28 10:06:08.459795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:29.889 [2024-11-28 10:06:08.459802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.889 [2024-11-28 10:06:08.470949] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:29.889 [2024-11-28 10:06:08.471052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.889 [2024-11-28 10:06:08.471064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:29.889 [2024-11-28 10:06:08.471073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.235 ms 00:33:29.889 [2024-11-28 10:06:08.471081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.889 [2024-11-28 10:06:08.471769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.889 [2024-11-28 10:06:08.471941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:33:29.889 [2024-11-28 10:06:08.471956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.626 ms 00:33:29.889 [2024-11-28 10:06:08.471964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.889 [2024-11-28 10:06:08.474196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.889 [2024-11-28 10:06:08.474227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:33:29.889 [2024-11-28 10:06:08.474237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.211 ms 00:33:29.889 [2024-11-28 10:06:08.474246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.889 [2024-11-28 10:06:08.474283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.889 [2024-11-28 10:06:08.474292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:33:29.889 [2024-11-28 10:06:08.474304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:29.889 [2024-11-28 10:06:08.474312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.889 [2024-11-28 10:06:08.474415] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.889 [2024-11-28 10:06:08.474424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:29.889 [2024-11-28 10:06:08.474432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:33:29.889 [2024-11-28 10:06:08.474440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.889 [2024-11-28 10:06:08.474460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.889 [2024-11-28 10:06:08.474468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:29.889 [2024-11-28 10:06:08.474476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:29.889 [2024-11-28 10:06:08.474484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.889 [2024-11-28 10:06:08.474514] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:29.889 [2024-11-28 10:06:08.474523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.889 [2024-11-28 10:06:08.474531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:29.889 [2024-11-28 10:06:08.474539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:33:29.889 [2024-11-28 10:06:08.474546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.889 [2024-11-28 10:06:08.474598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.889 [2024-11-28 10:06:08.474607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:29.889 [2024-11-28 10:06:08.474615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:33:29.889 [2024-11-28 10:06:08.474622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.889 [2024-11-28 10:06:08.475722] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1435.725 ms, result 0 00:33:29.889 [2024-11-28 10:06:08.487376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:29.889 [2024-11-28 10:06:08.503376] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:29.889 [2024-11-28 10:06:08.511852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:29.889 10:06:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:29.889 10:06:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:29.889 10:06:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:29.889 10:06:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:29.889 10:06:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:33:29.889 10:06:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:29.889 10:06:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:29.889 10:06:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:29.889 10:06:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:29.889 Validate MD5 checksum, iteration 1 00:33:29.889 10:06:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:29.889 10:06:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:29.889 10:06:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:29.889 10:06:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:29.889 10:06:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:29.889 10:06:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:29.889 [2024-11-28 10:06:08.587464] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:33:29.889 [2024-11-28 10:06:08.587638] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85434 ] 00:33:29.889 [2024-11-28 10:06:08.738591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.151 [2024-11-28 10:06:08.860597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.542  [2024-11-28T10:06:11.360Z] Copying: 556/1024 [MB] (556 MBps) [2024-11-28T10:06:12.325Z] Copying: 1024/1024 [MB] (average 580 MBps) 00:33:33.445 00:33:33.445 10:06:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:33.445 10:06:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:35.358 10:06:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:35.358 Validate MD5 checksum, iteration 2 00:33:35.358 10:06:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=68de6b9a4e308b77a92cb99a3849fc0e 00:33:35.358 10:06:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 68de6b9a4e308b77a92cb99a3849fc0e != \6\8\d\e\6\b\9\a\4\e\3\0\8\b\7\7\a\9\2\c\b\9\9\a\3\8\4\9\f\c\0\e ]] 00:33:35.358 10:06:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:35.358 10:06:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:35.358 10:06:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:35.358 10:06:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:35.358 10:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:35.358 10:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:35.358 10:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:35.358 10:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:35.358 10:06:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:35.358 [2024-11-28 10:06:14.171086] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:33:35.358 [2024-11-28 10:06:14.171296] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85494 ] 00:33:35.617 [2024-11-28 10:06:14.320779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.617 [2024-11-28 10:06:14.396371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:37.076  [2024-11-28T10:06:16.236Z] Copying: 807/1024 [MB] (807 MBps) [2024-11-28T10:06:17.620Z] Copying: 1024/1024 [MB] (average 801 MBps) 00:33:38.740 00:33:38.740 10:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:38.740 10:06:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=de870a5a7096685d53f224c342da02a7 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ de870a5a7096685d53f224c342da02a7 != \d\e\8\7\0\a\5\a\7\0\9\6\6\8\5\d\5\3\f\2\2\4\c\3\4\2\d\a\0\2\a\7 ]] 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85398 ]] 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85398 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85398 ']' 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85398 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85398 00:33:41.283 killing process with pid 85398 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85398' 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 85398 00:33:41.283 10:06:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85398 00:33:41.543 [2024-11-28 10:06:20.250354] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:41.543 [2024-11-28 10:06:20.261513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:41.543 [2024-11-28 10:06:20.261549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:41.543 [2024-11-28 10:06:20.261561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:41.543 [2024-11-28 10:06:20.261567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.543 [2024-11-28 10:06:20.261585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:41.543 [2024-11-28 10:06:20.263892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:41.543 [2024-11-28 10:06:20.263919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:41.543 [2024-11-28 10:06:20.263932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.295 ms 00:33:41.543 [2024-11-28 10:06:20.263938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.543 [2024-11-28 10:06:20.264129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:41.543 [2024-11-28 10:06:20.264138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:41.543 [2024-11-28 10:06:20.264145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.170 ms 00:33:41.543 [2024-11-28 10:06:20.264161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.543 [2024-11-28 10:06:20.265437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:41.543 [2024-11-28 10:06:20.265459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:41.543 [2024-11-28 10:06:20.265468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.264 ms 00:33:41.543 [2024-11-28 10:06:20.265478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.543 [2024-11-28 10:06:20.266371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:41.543 [2024-11-28 10:06:20.266392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:41.543 [2024-11-28 10:06:20.266399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.867 ms 00:33:41.543 [2024-11-28 10:06:20.266406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.543 [2024-11-28 10:06:20.274716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:41.543 [2024-11-28 10:06:20.274743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:41.543 [2024-11-28 10:06:20.274756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.281 ms 00:33:41.544 [2024-11-28 10:06:20.274763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.544 [2024-11-28 10:06:20.279235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:41.544 [2024-11-28 10:06:20.279260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:33:41.544 [2024-11-28 10:06:20.279269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.442 ms 00:33:41.544 [2024-11-28 10:06:20.279277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.544 [2024-11-28 10:06:20.279336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:41.544 [2024-11-28 10:06:20.279344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:41.544 [2024-11-28 10:06:20.279352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:33:41.544 [2024-11-28 10:06:20.279362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.544 [2024-11-28 10:06:20.287201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:41.544 [2024-11-28 10:06:20.287226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:41.544 [2024-11-28 10:06:20.287234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.826 ms 00:33:41.544 [2024-11-28 10:06:20.287240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.544 [2024-11-28 10:06:20.294755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:41.544 [2024-11-28 10:06:20.294921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:41.544 [2024-11-28 10:06:20.294933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.489 ms 00:33:41.544 [2024-11-28 10:06:20.294940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.544 [2024-11-28 10:06:20.302428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:41.544 [2024-11-28 10:06:20.302451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:41.544 [2024-11-28 10:06:20.302459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.463 ms 00:33:41.544 [2024-11-28 10:06:20.302465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.544 [2024-11-28 10:06:20.310119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:41.544 [2024-11-28 10:06:20.310240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:41.544 [2024-11-28 10:06:20.310253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.607 ms 00:33:41.544 [2024-11-28 10:06:20.310259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.544 [2024-11-28 10:06:20.310284] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:41.544 [2024-11-28 10:06:20.310296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:41.544 [2024-11-28 10:06:20.310304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:41.544 [2024-11-28 10:06:20.310310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:41.544 [2024-11-28 10:06:20.310317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:41.544 [2024-11-28 10:06:20.310323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:41.544 [2024-11-28 10:06:20.310329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:41.544 [2024-11-28 10:06:20.310335] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:41.544 [2024-11-28 10:06:20.310342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:41.544 [2024-11-28 10:06:20.310348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:41.544 [2024-11-28 10:06:20.310354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:41.544 [2024-11-28 10:06:20.310360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:41.544 [2024-11-28 10:06:20.310366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:41.544 [2024-11-28 10:06:20.310372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:41.544 [2024-11-28 10:06:20.310377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:41.544 [2024-11-28 10:06:20.310384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:41.544 [2024-11-28 10:06:20.310389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:41.544 [2024-11-28 10:06:20.310395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:41.544 [2024-11-28 10:06:20.310401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:41.544 [2024-11-28 10:06:20.310409] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:41.544 [2024-11-28 10:06:20.310415] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 8ad00f45-ab03-4a63-8071-370546f137fd 00:33:41.544 [2024-11-28 10:06:20.310421] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:41.544 [2024-11-28 10:06:20.310427] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:33:41.544 [2024-11-28 10:06:20.310432] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:33:41.544 [2024-11-28 10:06:20.310438] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:33:41.544 [2024-11-28 10:06:20.310444] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:41.544 [2024-11-28 10:06:20.310449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:41.544 [2024-11-28 10:06:20.310459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:41.544 [2024-11-28 10:06:20.310465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:41.544 [2024-11-28 10:06:20.310475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:41.544 [2024-11-28 10:06:20.310482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:41.544 [2024-11-28 10:06:20.310489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:41.544 [2024-11-28 10:06:20.310496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.199 ms 00:33:41.544 [2024-11-28 10:06:20.310503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.544 [2024-11-28 10:06:20.320766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:41.544 [2024-11-28 10:06:20.320790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:33:41.544 [2024-11-28 10:06:20.320799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.251 ms 00:33:41.544 [2024-11-28 10:06:20.320806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.544 [2024-11-28 10:06:20.321099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:41.544 [2024-11-28 10:06:20.321107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:41.544 [2024-11-28 10:06:20.321114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.276 ms 00:33:41.544 [2024-11-28 10:06:20.321120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.544 [2024-11-28 10:06:20.356281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:41.544 [2024-11-28 10:06:20.356306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:41.544 [2024-11-28 10:06:20.356315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:41.544 [2024-11-28 10:06:20.356326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.544 [2024-11-28 10:06:20.356352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:41.544 [2024-11-28 10:06:20.356359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:41.544 [2024-11-28 10:06:20.356366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:41.544 [2024-11-28 10:06:20.356372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.544 [2024-11-28 10:06:20.356444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:41.544 [2024-11-28 10:06:20.356454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:41.544 [2024-11-28 10:06:20.356461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:41.544 [2024-11-28 10:06:20.356467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.544 [2024-11-28 10:06:20.356484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:41.544 [2024-11-28 10:06:20.356490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:41.544 [2024-11-28 10:06:20.356496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:41.544 [2024-11-28 10:06:20.356502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.544 [2024-11-28 10:06:20.420302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:41.544 [2024-11-28 10:06:20.420334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:41.544 [2024-11-28 10:06:20.420343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:41.544 [2024-11-28 10:06:20.420350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.805 [2024-11-28 10:06:20.472436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:41.805 [2024-11-28 10:06:20.472470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:41.805 [2024-11-28 10:06:20.472479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:41.805 [2024-11-28 10:06:20.472486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.805 [2024-11-28 10:06:20.472553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:41.805 [2024-11-28 10:06:20.472562] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:41.805 [2024-11-28 10:06:20.472569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:41.805 [2024-11-28 10:06:20.472576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.805 [2024-11-28 10:06:20.472625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:41.805 [2024-11-28 10:06:20.472644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:41.805 [2024-11-28 10:06:20.472652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:41.805 [2024-11-28 10:06:20.472658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.805 [2024-11-28 10:06:20.472736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:41.805 [2024-11-28 10:06:20.472744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:41.805 [2024-11-28 10:06:20.472751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:41.805 [2024-11-28 10:06:20.472757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.805 [2024-11-28 10:06:20.472784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:41.805 [2024-11-28 10:06:20.472792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:41.805 [2024-11-28 10:06:20.472800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:41.806 [2024-11-28 10:06:20.472806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.806 [2024-11-28 10:06:20.472840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:41.806 [2024-11-28 10:06:20.472847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:41.806 [2024-11-28 10:06:20.472854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:41.806 [2024-11-28 10:06:20.472860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.806 [2024-11-28 10:06:20.472898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:41.806 [2024-11-28 10:06:20.472908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:41.806 [2024-11-28 10:06:20.472915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:41.806 [2024-11-28 10:06:20.472921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:41.806 [2024-11-28 10:06:20.473029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 211.486 ms, result 0 00:33:42.375 10:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:42.375 10:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:42.375 10:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:33:42.375 10:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:33:42.375 10:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:33:42.375 10:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:42.375 10:06:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:33:42.375 Remove shared memory files 00:33:42.375 10:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:33:42.375 10:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:33:42.375 10:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:33:42.375 10:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85153 00:33:42.375 10:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:42.375 10:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:33:42.375 ************************************ 00:33:42.375 END TEST ftl_upgrade_shutdown 00:33:42.375 ************************************ 00:33:42.375 00:33:42.375 real 1m21.111s 00:33:42.375 user 1m49.609s 00:33:42.375 sys 0m19.905s 00:33:42.375 10:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:42.375 10:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:42.375 10:06:21 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:33:42.375 10:06:21 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:33:42.375 10:06:21 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:33:42.375 10:06:21 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:42.375 10:06:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:42.375 ************************************ 00:33:42.375 START TEST ftl_restore_fast 00:33:42.375 ************************************ 00:33:42.375 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:33:42.635 * Looking for test storage... 00:33:42.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:42.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.635 --rc genhtml_branch_coverage=1 00:33:42.635 --rc genhtml_function_coverage=1 00:33:42.635 --rc genhtml_legend=1 00:33:42.635 --rc geninfo_all_blocks=1 00:33:42.635 --rc geninfo_unexecuted_blocks=1 00:33:42.635 00:33:42.635 ' 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:42.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.635 --rc genhtml_branch_coverage=1 00:33:42.635 --rc genhtml_function_coverage=1 00:33:42.635 --rc genhtml_legend=1 00:33:42.635 --rc geninfo_all_blocks=1 00:33:42.635 --rc geninfo_unexecuted_blocks=1 00:33:42.635 00:33:42.635 ' 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:42.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.635 --rc genhtml_branch_coverage=1 00:33:42.635 --rc genhtml_function_coverage=1 00:33:42.635 --rc genhtml_legend=1 00:33:42.635 --rc geninfo_all_blocks=1 00:33:42.635 --rc geninfo_unexecuted_blocks=1 00:33:42.635 00:33:42.635 ' 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:42.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.635 --rc genhtml_branch_coverage=1 00:33:42.635 --rc genhtml_function_coverage=1 00:33:42.635 --rc genhtml_legend=1 00:33:42.635 --rc geninfo_all_blocks=1 00:33:42.635 --rc geninfo_unexecuted_blocks=1 00:33:42.635 00:33:42.635 ' 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:33:42.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.LE64SKiDu2 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:33:42.635 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:33:42.636 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:33:42.636 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:33:42.636 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=85647 00:33:42.636 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 85647 00:33:42.636 10:06:21 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:42.636 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 85647 ']' 00:33:42.636 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:42.636 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:42.636 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:42.636 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:42.636 10:06:21 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:42.636 [2024-11-28 10:06:21.464578] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:33:42.636 [2024-11-28 10:06:21.464699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85647 ] 00:33:42.896 [2024-11-28 10:06:21.618632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.896 [2024-11-28 10:06:21.712102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.468 10:06:22 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:43.468 10:06:22 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:33:43.468 10:06:22 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:33:43.468 10:06:22 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:33:43.468 10:06:22 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:33:43.468 10:06:22 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:33:43.468 10:06:22 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:33:43.468 10:06:22 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:43.730 10:06:22 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:33:43.730 10:06:22 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:33:43.730 10:06:22 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:33:43.730 10:06:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:33:43.730 10:06:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:43.730 10:06:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:33:43.730 10:06:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:33:43.730 10:06:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:33:43.990 10:06:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:43.990 { 00:33:43.990 "name": "nvme0n1", 00:33:43.990 "aliases": [ 00:33:43.990 "50367888-33c8-4294-9dcf-2c7ef04fcf5e" 00:33:43.990 ], 00:33:43.990 "product_name": "NVMe disk", 00:33:43.990 "block_size": 4096, 00:33:43.990 "num_blocks": 1310720, 00:33:43.990 "uuid": "50367888-33c8-4294-9dcf-2c7ef04fcf5e", 00:33:43.990 "numa_id": -1, 00:33:43.990 "assigned_rate_limits": { 00:33:43.990 "rw_ios_per_sec": 0, 00:33:43.990 "rw_mbytes_per_sec": 0, 00:33:43.990 "r_mbytes_per_sec": 0, 00:33:43.990 "w_mbytes_per_sec": 0 00:33:43.990 }, 00:33:43.990 "claimed": true, 00:33:43.990 "claim_type": "read_many_write_one", 00:33:43.990 "zoned": false, 00:33:43.990 "supported_io_types": { 00:33:43.990 "read": true, 00:33:43.990 "write": true, 00:33:43.990 "unmap": true, 00:33:43.990 "flush": true, 00:33:43.990 "reset": true, 00:33:43.990 "nvme_admin": true, 00:33:43.990 "nvme_io": true, 00:33:43.990 "nvme_io_md": false, 00:33:43.990 "write_zeroes": true, 00:33:43.990 "zcopy": false, 00:33:43.990 "get_zone_info": false, 00:33:43.990 "zone_management": false, 00:33:43.990 "zone_append": false, 00:33:43.990 "compare": true, 00:33:43.990 "compare_and_write": false, 00:33:43.990 "abort": true, 00:33:43.990 "seek_hole": false, 00:33:43.990 "seek_data": false, 00:33:43.990 "copy": true, 00:33:43.990 "nvme_iov_md": 
false 00:33:43.990 }, 00:33:43.990 "driver_specific": { 00:33:43.990 "nvme": [ 00:33:43.990 { 00:33:43.990 "pci_address": "0000:00:11.0", 00:33:43.990 "trid": { 00:33:43.990 "trtype": "PCIe", 00:33:43.990 "traddr": "0000:00:11.0" 00:33:43.990 }, 00:33:43.990 "ctrlr_data": { 00:33:43.990 "cntlid": 0, 00:33:43.990 "vendor_id": "0x1b36", 00:33:43.990 "model_number": "QEMU NVMe Ctrl", 00:33:43.990 "serial_number": "12341", 00:33:43.990 "firmware_revision": "8.0.0", 00:33:43.990 "subnqn": "nqn.2019-08.org.qemu:12341", 00:33:43.990 "oacs": { 00:33:43.990 "security": 0, 00:33:43.990 "format": 1, 00:33:43.990 "firmware": 0, 00:33:43.990 "ns_manage": 1 00:33:43.990 }, 00:33:43.990 "multi_ctrlr": false, 00:33:43.990 "ana_reporting": false 00:33:43.990 }, 00:33:43.990 "vs": { 00:33:43.990 "nvme_version": "1.4" 00:33:43.990 }, 00:33:43.990 "ns_data": { 00:33:43.990 "id": 1, 00:33:43.990 "can_share": false 00:33:43.990 } 00:33:43.990 } 00:33:43.990 ], 00:33:43.990 "mp_policy": "active_passive" 00:33:43.990 } 00:33:43.990 } 00:33:43.990 ]' 00:33:43.990 10:06:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:43.990 10:06:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:33:43.990 10:06:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:43.990 10:06:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:33:43.990 10:06:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:33:43.990 10:06:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:33:43.990 10:06:22 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:33:43.990 10:06:22 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:33:43.991 10:06:22 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:33:43.991 10:06:22 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:43.991 10:06:22 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:44.252 10:06:23 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=ca474e77-d0c4-4234-9027-cbaae4a00cbd 00:33:44.252 10:06:23 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:33:44.252 10:06:23 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ca474e77-d0c4-4234-9027-cbaae4a00cbd 00:33:44.513 10:06:23 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:33:44.775 10:06:23 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=33e08e52-5d32-4e9f-b411-635878c9c093 00:33:44.775 10:06:23 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 33e08e52-5d32-4e9f-b411-635878c9c093 00:33:44.775 10:06:23 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=fe6563ca-f566-4b4e-89cc-04b4c3082d60 00:33:44.775 10:06:23 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:33:44.775 10:06:23 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fe6563ca-f566-4b4e-89cc-04b4c3082d60 00:33:44.775 10:06:23 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:33:44.775 10:06:23 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:33:44.775 10:06:23 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local 
base_bdev=fe6563ca-f566-4b4e-89cc-04b4c3082d60 00:33:44.775 10:06:23 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:33:44.775 10:06:23 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size fe6563ca-f566-4b4e-89cc-04b4c3082d60 00:33:44.775 10:06:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=fe6563ca-f566-4b4e-89cc-04b4c3082d60 00:33:44.775 10:06:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:44.775 10:06:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:33:44.775 10:06:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:33:44.775 10:06:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe6563ca-f566-4b4e-89cc-04b4c3082d60 00:33:45.037 10:06:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:45.037 { 00:33:45.037 "name": "fe6563ca-f566-4b4e-89cc-04b4c3082d60", 00:33:45.037 "aliases": [ 00:33:45.037 "lvs/nvme0n1p0" 00:33:45.037 ], 00:33:45.037 "product_name": "Logical Volume", 00:33:45.037 "block_size": 4096, 00:33:45.037 "num_blocks": 26476544, 00:33:45.037 "uuid": "fe6563ca-f566-4b4e-89cc-04b4c3082d60", 00:33:45.037 "assigned_rate_limits": { 00:33:45.037 "rw_ios_per_sec": 0, 00:33:45.037 "rw_mbytes_per_sec": 0, 00:33:45.037 "r_mbytes_per_sec": 0, 00:33:45.037 "w_mbytes_per_sec": 0 00:33:45.037 }, 00:33:45.037 "claimed": false, 00:33:45.037 "zoned": false, 00:33:45.037 "supported_io_types": { 00:33:45.037 "read": true, 00:33:45.037 "write": true, 00:33:45.037 "unmap": true, 00:33:45.037 "flush": false, 00:33:45.037 "reset": true, 00:33:45.037 "nvme_admin": false, 00:33:45.037 "nvme_io": false, 00:33:45.037 "nvme_io_md": false, 00:33:45.037 "write_zeroes": true, 00:33:45.037 "zcopy": false, 00:33:45.037 "get_zone_info": false, 00:33:45.037 "zone_management": false, 00:33:45.037 "zone_append": false, 00:33:45.037 "compare": false, 00:33:45.037 "compare_and_write": false, 00:33:45.037 "abort": false, 00:33:45.037 "seek_hole": true, 00:33:45.037 "seek_data": true, 00:33:45.037 "copy": false, 00:33:45.037 "nvme_iov_md": false 00:33:45.037 }, 00:33:45.037 "driver_specific": { 00:33:45.037 "lvol": { 00:33:45.037 "lvol_store_uuid": "33e08e52-5d32-4e9f-b411-635878c9c093", 00:33:45.037 "base_bdev": "nvme0n1", 00:33:45.037 "thin_provision": true, 00:33:45.037 "num_allocated_clusters": 0, 00:33:45.037 "snapshot": false, 00:33:45.037 "clone": false, 00:33:45.037 "esnap_clone": false 00:33:45.037 } 00:33:45.037 } 00:33:45.037 } 00:33:45.037 ]' 00:33:45.037 10:06:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:45.037 10:06:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:33:45.037 10:06:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:45.299 10:06:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:45.299 10:06:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:45.299 10:06:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:33:45.299 10:06:23 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:33:45.299 10:06:23 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:33:45.299 10:06:23 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
00:33:45.299 10:06:24 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:33:45.299 10:06:24 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:33:45.299 10:06:24 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size fe6563ca-f566-4b4e-89cc-04b4c3082d60 00:33:45.299 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=fe6563ca-f566-4b4e-89cc-04b4c3082d60 00:33:45.299 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:45.299 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:33:45.299 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:33:45.560 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe6563ca-f566-4b4e-89cc-04b4c3082d60 00:33:45.560 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:45.560 { 00:33:45.560 "name": "fe6563ca-f566-4b4e-89cc-04b4c3082d60", 00:33:45.560 "aliases": [ 00:33:45.560 "lvs/nvme0n1p0" 00:33:45.560 ], 00:33:45.560 "product_name": "Logical Volume", 00:33:45.560 "block_size": 4096, 00:33:45.560 "num_blocks": 26476544, 00:33:45.560 "uuid": "fe6563ca-f566-4b4e-89cc-04b4c3082d60", 00:33:45.560 "assigned_rate_limits": { 00:33:45.560 "rw_ios_per_sec": 0, 00:33:45.560 "rw_mbytes_per_sec": 0, 00:33:45.560 "r_mbytes_per_sec": 0, 00:33:45.560 "w_mbytes_per_sec": 0 00:33:45.560 }, 00:33:45.560 "claimed": false, 00:33:45.560 "zoned": false, 00:33:45.560 "supported_io_types": { 00:33:45.560 "read": true, 00:33:45.560 "write": true, 00:33:45.560 "unmap": true, 00:33:45.560 "flush": false, 00:33:45.560 "reset": true, 00:33:45.560 "nvme_admin": false, 00:33:45.560 "nvme_io": false, 00:33:45.560 "nvme_io_md": false, 00:33:45.560 "write_zeroes": true, 00:33:45.560 "zcopy": false, 00:33:45.560 "get_zone_info": false, 00:33:45.560 "zone_management": false, 00:33:45.560 "zone_append": false, 00:33:45.560 "compare": false, 00:33:45.560 "compare_and_write": false, 00:33:45.560 "abort": false, 00:33:45.560 "seek_hole": true, 00:33:45.560 "seek_data": true, 00:33:45.560 "copy": false, 00:33:45.560 "nvme_iov_md": false 00:33:45.560 }, 00:33:45.560 "driver_specific": { 00:33:45.560 "lvol": { 00:33:45.560 "lvol_store_uuid": "33e08e52-5d32-4e9f-b411-635878c9c093", 00:33:45.560 "base_bdev": "nvme0n1", 00:33:45.560 "thin_provision": true, 00:33:45.560 "num_allocated_clusters": 0, 00:33:45.560 "snapshot": false, 00:33:45.560 "clone": false, 00:33:45.560 "esnap_clone": false 00:33:45.560 } 00:33:45.560 } 00:33:45.560 } 00:33:45.560 ]' 00:33:45.560 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:45.560 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:33:45.560 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:45.821 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:45.821 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:45.821 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:33:45.821 10:06:24 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:33:45.821 10:06:24 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:33:45.821 10:06:24 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 
00:33:45.821 10:06:24 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size fe6563ca-f566-4b4e-89cc-04b4c3082d60 00:33:45.821 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=fe6563ca-f566-4b4e-89cc-04b4c3082d60 00:33:45.821 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:45.821 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:33:45.821 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:33:45.821 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe6563ca-f566-4b4e-89cc-04b4c3082d60 00:33:46.082 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:46.082 { 00:33:46.082 "name": "fe6563ca-f566-4b4e-89cc-04b4c3082d60", 00:33:46.082 "aliases": [ 00:33:46.082 "lvs/nvme0n1p0" 00:33:46.082 ], 00:33:46.082 "product_name": "Logical Volume", 00:33:46.082 "block_size": 4096, 00:33:46.082 "num_blocks": 26476544, 00:33:46.082 "uuid": "fe6563ca-f566-4b4e-89cc-04b4c3082d60", 00:33:46.082 "assigned_rate_limits": { 00:33:46.082 "rw_ios_per_sec": 0, 00:33:46.082 "rw_mbytes_per_sec": 0, 00:33:46.082 "r_mbytes_per_sec": 0, 00:33:46.082 "w_mbytes_per_sec": 0 00:33:46.082 }, 00:33:46.082 "claimed": false, 00:33:46.082 "zoned": false, 00:33:46.082 "supported_io_types": { 00:33:46.082 "read": true, 00:33:46.082 "write": true, 00:33:46.082 "unmap": true, 00:33:46.082 "flush": false, 00:33:46.082 "reset": true, 00:33:46.082 "nvme_admin": false, 00:33:46.082 "nvme_io": false, 00:33:46.082 "nvme_io_md": false, 00:33:46.082 "write_zeroes": true, 00:33:46.082 "zcopy": false, 00:33:46.082 "get_zone_info": false, 00:33:46.082 "zone_management": false, 00:33:46.082 "zone_append": false, 00:33:46.082 "compare": false, 00:33:46.082 "compare_and_write": false, 00:33:46.082 "abort": false, 00:33:46.082 "seek_hole": true, 00:33:46.082 "seek_data": true, 00:33:46.082 "copy": false, 00:33:46.082 "nvme_iov_md": false 00:33:46.082 }, 00:33:46.082 "driver_specific": { 00:33:46.082 "lvol": { 00:33:46.082 "lvol_store_uuid": "33e08e52-5d32-4e9f-b411-635878c9c093", 00:33:46.082 "base_bdev": "nvme0n1", 00:33:46.082 "thin_provision": true, 00:33:46.082 "num_allocated_clusters": 0, 00:33:46.082 "snapshot": false, 00:33:46.082 "clone": false, 00:33:46.082 "esnap_clone": false 00:33:46.082 } 00:33:46.082 } 00:33:46.082 } 00:33:46.082 ]' 00:33:46.082 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:46.082 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:33:46.082 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:46.082 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:46.082 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:46.082 10:06:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:33:46.082 10:06:24 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:33:46.082 10:06:24 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d fe6563ca-f566-4b4e-89cc-04b4c3082d60 --l2p_dram_limit 10' 00:33:46.082 10:06:24 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:33:46.082 10:06:24 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:33:46.082 10:06:24 ftl.ftl_restore_fast 
-- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:33:46.082 10:06:24 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:33:46.082 10:06:24 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:33:46.082 10:06:24 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fe6563ca-f566-4b4e-89cc-04b4c3082d60 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:33:46.344 [2024-11-28 10:06:25.090994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:46.344 [2024-11-28 10:06:25.091037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:46.344 [2024-11-28 10:06:25.091052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:46.344 [2024-11-28 10:06:25.091059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:46.344 [2024-11-28 10:06:25.091106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:46.344 [2024-11-28 10:06:25.091114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:46.344 [2024-11-28 10:06:25.091123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:33:46.344 [2024-11-28 10:06:25.091129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:46.344 [2024-11-28 10:06:25.091146] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:46.344 [2024-11-28 10:06:25.091704] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:46.344 [2024-11-28 10:06:25.091726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:46.344 [2024-11-28 10:06:25.091733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:46.344 [2024-11-28 10:06:25.091742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:33:46.344 [2024-11-28 10:06:25.091748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:46.344 [2024-11-28 10:06:25.091774] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 98d603d3-a2ee-4c3b-94b8-6abc305400e3 00:33:46.344 [2024-11-28 10:06:25.093049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:46.344 [2024-11-28 10:06:25.093078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:33:46.344 [2024-11-28 10:06:25.093087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:33:46.344 [2024-11-28 10:06:25.093098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:46.344 [2024-11-28 10:06:25.100023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:46.344 [2024-11-28 10:06:25.100050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:46.344 [2024-11-28 10:06:25.100058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.886 ms 00:33:46.344 [2024-11-28 10:06:25.100066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:46.344 [2024-11-28 10:06:25.100175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:46.344 [2024-11-28 10:06:25.100186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:46.344 [2024-11-28 10:06:25.100192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 
00:33:46.344 [2024-11-28 10:06:25.100204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:46.344 [2024-11-28 10:06:25.100242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:46.344 [2024-11-28 10:06:25.100252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:46.344 [2024-11-28 10:06:25.100261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:46.344 [2024-11-28 10:06:25.100269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:46.344 [2024-11-28 10:06:25.100285] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:46.344 [2024-11-28 10:06:25.103538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:46.344 [2024-11-28 10:06:25.103563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:46.344 [2024-11-28 10:06:25.103574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.254 ms 00:33:46.344 [2024-11-28 10:06:25.103580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:46.344 [2024-11-28 10:06:25.103608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:46.344 [2024-11-28 10:06:25.103615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:46.344 [2024-11-28 10:06:25.103624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:46.344 [2024-11-28 10:06:25.103630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:46.344 [2024-11-28 10:06:25.103644] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:33:46.344 [2024-11-28 10:06:25.103754] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:46.344 [2024-11-28 10:06:25.103774] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:46.344 [2024-11-28 10:06:25.103784] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:46.344 [2024-11-28 10:06:25.103794] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:46.344 [2024-11-28 10:06:25.103802] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:46.344 [2024-11-28 10:06:25.103810] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:46.344 [2024-11-28 10:06:25.103818] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:46.344 [2024-11-28 10:06:25.103826] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:46.344 [2024-11-28 10:06:25.103832] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:46.344 [2024-11-28 10:06:25.103840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:46.344 [2024-11-28 10:06:25.103851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:46.345 [2024-11-28 10:06:25.103861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:33:46.345 [2024-11-28 10:06:25.103867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:46.345 [2024-11-28 10:06:25.103933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:46.345 [2024-11-28 
10:06:25.103940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:46.345 [2024-11-28 10:06:25.103948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:33:46.345 [2024-11-28 10:06:25.103954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:46.345 [2024-11-28 10:06:25.104035] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:46.345 [2024-11-28 10:06:25.104045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:46.345 [2024-11-28 10:06:25.104053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:46.345 [2024-11-28 10:06:25.104060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:46.345 [2024-11-28 10:06:25.104068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:46.345 [2024-11-28 10:06:25.104074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:46.345 [2024-11-28 10:06:25.104080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:46.345 [2024-11-28 10:06:25.104085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:46.345 [2024-11-28 10:06:25.104093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:46.345 [2024-11-28 10:06:25.104098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:46.345 [2024-11-28 10:06:25.104106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:46.345 [2024-11-28 10:06:25.104111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:46.345 [2024-11-28 10:06:25.104119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:46.345 [2024-11-28 10:06:25.104124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:46.345 [2024-11-28 10:06:25.104131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:46.345 [2024-11-28 10:06:25.104136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:46.345 [2024-11-28 10:06:25.104146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:46.345 [2024-11-28 10:06:25.104162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:46.345 [2024-11-28 10:06:25.104173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:46.345 [2024-11-28 10:06:25.104179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:46.345 [2024-11-28 10:06:25.104187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:46.345 [2024-11-28 10:06:25.104192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:46.345 [2024-11-28 10:06:25.104201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:46.345 [2024-11-28 10:06:25.104206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:46.345 [2024-11-28 10:06:25.104213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:46.345 [2024-11-28 10:06:25.104218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:46.345 [2024-11-28 10:06:25.104225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:46.345 [2024-11-28 10:06:25.104230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:46.345 [2024-11-28 10:06:25.104237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 
00:33:46.345 [2024-11-28 10:06:25.104243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:46.345 [2024-11-28 10:06:25.104250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:46.345 [2024-11-28 10:06:25.104254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:46.345 [2024-11-28 10:06:25.104263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:46.345 [2024-11-28 10:06:25.104268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:46.345 [2024-11-28 10:06:25.104275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:46.345 [2024-11-28 10:06:25.104280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:46.345 [2024-11-28 10:06:25.104287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:46.345 [2024-11-28 10:06:25.104292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:46.345 [2024-11-28 10:06:25.104299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:46.345 [2024-11-28 10:06:25.104304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:46.345 [2024-11-28 10:06:25.104311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:46.345 [2024-11-28 10:06:25.104316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:46.345 [2024-11-28 10:06:25.104323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:46.345 [2024-11-28 10:06:25.104328] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:46.345 [2024-11-28 10:06:25.104337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:46.345 [2024-11-28 10:06:25.104343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:46.345 [2024-11-28 10:06:25.104350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:46.345 [2024-11-28 10:06:25.104356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:46.345 [2024-11-28 10:06:25.104365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:46.345 [2024-11-28 10:06:25.104370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:46.345 [2024-11-28 10:06:25.104379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:46.345 [2024-11-28 10:06:25.104385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:46.345 [2024-11-28 10:06:25.104392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:46.345 [2024-11-28 10:06:25.104401] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:46.345 [2024-11-28 10:06:25.104412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:46.345 [2024-11-28 10:06:25.104419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:46.345 [2024-11-28 10:06:25.104428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:46.345 [2024-11-28 10:06:25.104434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 
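The two dumps above describe the same on-disk layout twice: first per region in MiB (ftl_layout_dump), then as raw superblock metadata entries of the form type/ver/blk_offs/blk_sz, where offsets and sizes are counted in 4 KiB FTL blocks. The figures are mutually consistent - for example region type 0x2 (the L2P table) has blk_sz 0x5000 = 20480 blocks = 80 MiB, which matches both the "Region l2p ... blocks: 80.00 MiB" line and the sizing implied by "L2P entries: 20971520" times "L2P address size: 4" bytes. A quick sanity-check sketch (to_mib is a hypothetical helper, not part of the test, and it assumes the 4 KiB block size just described):

  # convert blk_sz fields from the superblock dump into MiB for comparison
  to_mib() { echo "scale=2; $(( $1 )) * 4096 / 1048576" | bc; }
  to_mib 0x5000   # l2p region (type 0x2)              -> 80.00
  to_mib 0x800    # one p2l checkpoint (types 0xa-0xd) -> 8.00
  to_mib 0x80     # band_md / band_md_mirror           -> .50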
00:33:46.345 [2024-11-28 10:06:25.104441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:46.345 [2024-11-28 10:06:25.104447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:46.345 [2024-11-28 10:06:25.104454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:46.345 [2024-11-28 10:06:25.104460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:46.345 [2024-11-28 10:06:25.104467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:46.345 [2024-11-28 10:06:25.104473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:46.345 [2024-11-28 10:06:25.104481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:46.345 [2024-11-28 10:06:25.104487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:46.345 [2024-11-28 10:06:25.104496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:46.345 [2024-11-28 10:06:25.104501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:46.345 [2024-11-28 10:06:25.104508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:46.345 [2024-11-28 10:06:25.104514] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:46.345 [2024-11-28 10:06:25.104521] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:46.345 [2024-11-28 10:06:25.104528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:46.345 [2024-11-28 10:06:25.104536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:46.345 [2024-11-28 10:06:25.104542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:46.345 [2024-11-28 10:06:25.104549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:46.345 [2024-11-28 10:06:25.104555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:46.345 [2024-11-28 10:06:25.104563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:46.345 [2024-11-28 10:06:25.104570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:33:46.345 [2024-11-28 10:06:25.104577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:46.345 [2024-11-28 10:06:25.104619] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs 
scrubbing, this may take a while. 00:33:46.345 [2024-11-28 10:06:25.104633] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:33:50.556 [2024-11-28 10:06:29.142050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.556 [2024-11-28 10:06:29.142091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:33:50.556 [2024-11-28 10:06:29.142102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4037.418 ms 00:33:50.556 [2024-11-28 10:06:29.142111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.556 [2024-11-28 10:06:29.165828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.556 [2024-11-28 10:06:29.165866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:50.556 [2024-11-28 10:06:29.165876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.529 ms 00:33:50.556 [2024-11-28 10:06:29.165886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.556 [2024-11-28 10:06:29.165972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.556 [2024-11-28 10:06:29.165982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:50.556 [2024-11-28 10:06:29.165990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:33:50.556 [2024-11-28 10:06:29.166003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.556 [2024-11-28 10:06:29.192743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.556 [2024-11-28 10:06:29.192775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:50.556 [2024-11-28 10:06:29.192785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.711 ms 00:33:50.556 [2024-11-28 10:06:29.192793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.556 [2024-11-28 10:06:29.192818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.556 [2024-11-28 10:06:29.192826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:50.556 [2024-11-28 10:06:29.192833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:50.556 [2024-11-28 10:06:29.192846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.556 [2024-11-28 10:06:29.193271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.556 [2024-11-28 10:06:29.193295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:50.556 [2024-11-28 10:06:29.193303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:33:50.556 [2024-11-28 10:06:29.193311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.556 [2024-11-28 10:06:29.193392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.556 [2024-11-28 10:06:29.193409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:50.556 [2024-11-28 10:06:29.193416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:33:50.556 [2024-11-28 10:06:29.193426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.556 [2024-11-28 10:06:29.206573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.556 [2024-11-28 10:06:29.206612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 
00:33:50.556 [2024-11-28 10:06:29.206621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.133 ms 00:33:50.556 [2024-11-28 10:06:29.206628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.556 [2024-11-28 10:06:29.233629] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:50.556 [2024-11-28 10:06:29.236569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.556 [2024-11-28 10:06:29.236596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:50.556 [2024-11-28 10:06:29.236608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.882 ms 00:33:50.556 [2024-11-28 10:06:29.236615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.556 [2024-11-28 10:06:29.312991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.557 [2024-11-28 10:06:29.313030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:33:50.557 [2024-11-28 10:06:29.313042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.345 ms 00:33:50.557 [2024-11-28 10:06:29.313049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.557 [2024-11-28 10:06:29.313208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.557 [2024-11-28 10:06:29.313217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:50.557 [2024-11-28 10:06:29.313228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:33:50.557 [2024-11-28 10:06:29.313236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.557 [2024-11-28 10:06:29.331670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.557 [2024-11-28 10:06:29.331697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:33:50.557 [2024-11-28 10:06:29.331709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.397 ms 00:33:50.557 [2024-11-28 10:06:29.331715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.557 [2024-11-28 10:06:29.349672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.557 [2024-11-28 10:06:29.349698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:33:50.557 [2024-11-28 10:06:29.349709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.923 ms 00:33:50.557 [2024-11-28 10:06:29.349715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.557 [2024-11-28 10:06:29.350161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.557 [2024-11-28 10:06:29.350188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:50.557 [2024-11-28 10:06:29.350200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:33:50.557 [2024-11-28 10:06:29.350207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.557 [2024-11-28 10:06:29.416236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.557 [2024-11-28 10:06:29.416273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:33:50.557 [2024-11-28 10:06:29.416285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.987 ms 00:33:50.557 [2024-11-28 10:06:29.416292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.819 
[2024-11-28 10:06:29.436350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.819 [2024-11-28 10:06:29.436378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:33:50.819 [2024-11-28 10:06:29.436388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.001 ms 00:33:50.819 [2024-11-28 10:06:29.436395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.819 [2024-11-28 10:06:29.454627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.819 [2024-11-28 10:06:29.454652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:33:50.819 [2024-11-28 10:06:29.454662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.199 ms 00:33:50.819 [2024-11-28 10:06:29.454669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.819 [2024-11-28 10:06:29.473465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.819 [2024-11-28 10:06:29.473491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:50.819 [2024-11-28 10:06:29.473501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.765 ms 00:33:50.819 [2024-11-28 10:06:29.473508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.819 [2024-11-28 10:06:29.473543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.819 [2024-11-28 10:06:29.473550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:50.819 [2024-11-28 10:06:29.473560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:50.819 [2024-11-28 10:06:29.473566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.819 [2024-11-28 10:06:29.473631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.819 [2024-11-28 10:06:29.473641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:50.819 [2024-11-28 10:06:29.473651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:33:50.819 [2024-11-28 10:06:29.473657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.819 [2024-11-28 10:06:29.474555] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4383.161 ms, result 0 00:33:50.819 { 00:33:50.819 "name": "ftl0", 00:33:50.819 "uuid": "98d603d3-a2ee-4c3b-94b8-6abc305400e3" 00:33:50.819 } 00:33:50.819 10:06:29 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:33:50.819 10:06:29 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:33:51.080 10:06:29 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:33:51.080 10:06:29 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:33:51.080 [2024-11-28 10:06:29.877976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.080 [2024-11-28 10:06:29.878015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:51.080 [2024-11-28 10:06:29.878025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:51.080 [2024-11-28 10:06:29.878033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.080 [2024-11-28 10:06:29.878051] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:33:51.080 [2024-11-28 10:06:29.880338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.080 [2024-11-28 10:06:29.880362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:51.080 [2024-11-28 10:06:29.880372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.272 ms 00:33:51.080 [2024-11-28 10:06:29.880379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.080 [2024-11-28 10:06:29.880582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.080 [2024-11-28 10:06:29.880591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:51.080 [2024-11-28 10:06:29.880600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:33:51.080 [2024-11-28 10:06:29.880605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.080 [2024-11-28 10:06:29.883059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.080 [2024-11-28 10:06:29.883076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:51.080 [2024-11-28 10:06:29.883086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.439 ms 00:33:51.080 [2024-11-28 10:06:29.883093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.080 [2024-11-28 10:06:29.887738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.080 [2024-11-28 10:06:29.887761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:51.080 [2024-11-28 10:06:29.887771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.628 ms 00:33:51.080 [2024-11-28 10:06:29.887778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.080 [2024-11-28 10:06:29.905392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.080 [2024-11-28 10:06:29.905417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:51.080 [2024-11-28 10:06:29.905427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.562 ms 00:33:51.080 [2024-11-28 10:06:29.905434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.080 [2024-11-28 10:06:29.918827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.080 [2024-11-28 10:06:29.918855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:51.080 [2024-11-28 10:06:29.918866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.360 ms 00:33:51.080 [2024-11-28 10:06:29.918873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.080 [2024-11-28 10:06:29.918987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.080 [2024-11-28 10:06:29.918997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:51.080 [2024-11-28 10:06:29.919006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:33:51.080 [2024-11-28 10:06:29.919013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.080 [2024-11-28 10:06:29.937141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.080 [2024-11-28 10:06:29.937178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:51.080 [2024-11-28 10:06:29.937188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.110 ms 00:33:51.080 
[2024-11-28 10:06:29.937194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.081 [2024-11-28 10:06:29.955557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.081 [2024-11-28 10:06:29.955584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:51.081 [2024-11-28 10:06:29.955593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.333 ms 00:33:51.081 [2024-11-28 10:06:29.955600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.344 [2024-11-28 10:06:29.973377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.344 [2024-11-28 10:06:29.973403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:51.344 [2024-11-28 10:06:29.973412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.745 ms 00:33:51.344 [2024-11-28 10:06:29.973418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.344 [2024-11-28 10:06:29.990942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.344 [2024-11-28 10:06:29.990967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:51.344 [2024-11-28 10:06:29.990976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.464 ms 00:33:51.344 [2024-11-28 10:06:29.990981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.344 [2024-11-28 10:06:29.991010] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:51.344 [2024-11-28 10:06:29.991022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: 
free 00:33:51.344 [2024-11-28 10:06:29.991123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 
261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:51.344 [2024-11-28 10:06:29.991466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991642] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:51.345 [2024-11-28 10:06:29.991733] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:51.345 [2024-11-28 10:06:29.991741] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 98d603d3-a2ee-4c3b-94b8-6abc305400e3 00:33:51.345 [2024-11-28 10:06:29.991749] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:51.345 [2024-11-28 10:06:29.991758] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:51.345 [2024-11-28 10:06:29.991766] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:51.345 [2024-11-28 10:06:29.991774] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:51.345 [2024-11-28 10:06:29.991780] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:51.345 [2024-11-28 10:06:29.991788] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:51.345 [2024-11-28 10:06:29.991795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:51.345 [2024-11-28 10:06:29.991802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:51.345 [2024-11-28 10:06:29.991807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:51.345 [2024-11-28 10:06:29.991815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.345 [2024-11-28 10:06:29.991821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:51.345 [2024-11-28 10:06:29.991829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:33:51.345 [2024-11-28 10:06:29.991836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.345 [2024-11-28 10:06:30.001330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.345 [2024-11-28 10:06:30.001355] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:51.345 [2024-11-28 10:06:30.001364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.468 ms 00:33:51.345 [2024-11-28 10:06:30.001371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.345 [2024-11-28 10:06:30.001635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.345 [2024-11-28 10:06:30.001643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:51.345 [2024-11-28 10:06:30.001653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:33:51.345 [2024-11-28 10:06:30.001658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.345 [2024-11-28 10:06:30.037334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:51.345 [2024-11-28 10:06:30.037371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:51.345 [2024-11-28 10:06:30.037383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:51.345 [2024-11-28 10:06:30.037390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.345 [2024-11-28 10:06:30.037450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:51.345 [2024-11-28 10:06:30.037457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:51.345 [2024-11-28 10:06:30.037468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:51.345 [2024-11-28 10:06:30.037474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.345 [2024-11-28 10:06:30.037550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:51.345 [2024-11-28 10:06:30.037560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:51.345 [2024-11-28 10:06:30.037569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:51.345 [2024-11-28 10:06:30.037577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.345 [2024-11-28 10:06:30.037596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:51.345 [2024-11-28 10:06:30.037602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:51.345 [2024-11-28 10:06:30.037611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:51.345 [2024-11-28 10:06:30.037619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.345 [2024-11-28 10:06:30.102404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:51.345 [2024-11-28 10:06:30.102444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:51.345 [2024-11-28 10:06:30.102457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:51.345 [2024-11-28 10:06:30.102463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.345 [2024-11-28 10:06:30.154203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:51.345 [2024-11-28 10:06:30.154253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:51.345 [2024-11-28 10:06:30.154268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:51.345 [2024-11-28 10:06:30.154275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.345 [2024-11-28 10:06:30.154369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
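The Rollback entries in this teardown trace (each reported as 0.000 ms) are the shutdown counterparts of the startup actions, replayed in reverse order by the 'FTL shutdown' management process. Once it reports result 0 just below, bdev_ftl_unload returns true and the script stops the SPDK application that owned the bdev through the killprocess helper from common/autotest_common.sh, whose xtrace follows. A simplified sketch of what that helper does, reconstructed from the traced lines (illustrative only, not the verbatim upstream function):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1          # the '[' -z ... ']' guard in the trace
      kill -0 "$pid" || return 0         # nothing to do if it already exited
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [[ $name != sudo ]] || return 1    # refuse to SIGTERM a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"                        # plain SIGTERM so SPDK can exit cleanly
      wait "$pid" || true                # reap the child; ignore its exit status
  }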
00:33:51.345 [2024-11-28 10:06:30.154378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:51.345 [2024-11-28 10:06:30.154387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:51.345 [2024-11-28 10:06:30.154393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.345 [2024-11-28 10:06:30.154437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:51.345 [2024-11-28 10:06:30.154445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:51.345 [2024-11-28 10:06:30.154454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:51.345 [2024-11-28 10:06:30.154460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.345 [2024-11-28 10:06:30.154545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:51.345 [2024-11-28 10:06:30.154555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:51.345 [2024-11-28 10:06:30.154565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:51.345 [2024-11-28 10:06:30.154571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.345 [2024-11-28 10:06:30.154603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:51.345 [2024-11-28 10:06:30.154610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:51.345 [2024-11-28 10:06:30.154619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:51.345 [2024-11-28 10:06:30.154625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.345 [2024-11-28 10:06:30.154664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:51.345 [2024-11-28 10:06:30.154672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:51.345 [2024-11-28 10:06:30.154681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:51.346 [2024-11-28 10:06:30.154687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.346 [2024-11-28 10:06:30.154728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:51.346 [2024-11-28 10:06:30.154736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:51.346 [2024-11-28 10:06:30.154744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:51.346 [2024-11-28 10:06:30.154750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.346 [2024-11-28 10:06:30.154873] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.860 ms, result 0 00:33:51.346 true 00:33:51.346 10:06:30 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 85647 00:33:51.346 10:06:30 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 85647 ']' 00:33:51.346 10:06:30 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 85647 00:33:51.346 10:06:30 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:33:51.346 10:06:30 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:51.346 10:06:30 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85647 00:33:51.346 10:06:30 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:51.346 10:06:30 ftl.ftl_restore_fast -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:51.346 killing process with pid 85647 00:33:51.346 10:06:30 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85647' 00:33:51.346 10:06:30 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 85647 00:33:51.346 10:06:30 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 85647 00:33:57.938 10:06:35 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:34:01.231 262144+0 records in 00:34:01.232 262144+0 records out 00:34:01.232 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.73714 s, 287 MB/s 00:34:01.232 10:06:39 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:03.147 10:06:41 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:03.147 [2024-11-28 10:06:41.901303] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:34:03.147 [2024-11-28 10:06:41.901394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85878 ] 00:34:03.407 [2024-11-28 10:06:42.054333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.407 [2024-11-28 10:06:42.164379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:03.668 [2024-11-28 10:06:42.453724] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:03.668 [2024-11-28 10:06:42.453810] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:03.930 [2024-11-28 10:06:42.615009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.930 [2024-11-28 10:06:42.615055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:03.930 [2024-11-28 10:06:42.615069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:03.930 [2024-11-28 10:06:42.615078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.930 [2024-11-28 10:06:42.615123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.930 [2024-11-28 10:06:42.615135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:03.930 [2024-11-28 10:06:42.615143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:34:03.930 [2024-11-28 10:06:42.615162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.930 [2024-11-28 10:06:42.615184] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:03.930 [2024-11-28 10:06:42.615835] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:03.930 [2024-11-28 10:06:42.615857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.930 [2024-11-28 10:06:42.615864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:03.930 [2024-11-28 10:06:42.615873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:34:03.930 [2024-11-28 10:06:42.615881] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.930 [2024-11-28 10:06:42.617308] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:03.931 [2024-11-28 10:06:42.630652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.931 [2024-11-28 10:06:42.630685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:03.931 [2024-11-28 10:06:42.630697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.346 ms 00:34:03.931 [2024-11-28 10:06:42.630704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.931 [2024-11-28 10:06:42.630762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.931 [2024-11-28 10:06:42.630771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:03.931 [2024-11-28 10:06:42.630779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:34:03.931 [2024-11-28 10:06:42.630787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.931 [2024-11-28 10:06:42.637485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.931 [2024-11-28 10:06:42.637514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:03.931 [2024-11-28 10:06:42.637523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.636 ms 00:34:03.931 [2024-11-28 10:06:42.637536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.931 [2024-11-28 10:06:42.637610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.931 [2024-11-28 10:06:42.637620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:03.931 [2024-11-28 10:06:42.637627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:34:03.931 [2024-11-28 10:06:42.637635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.931 [2024-11-28 10:06:42.637668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.931 [2024-11-28 10:06:42.637678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:03.931 [2024-11-28 10:06:42.637686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:03.931 [2024-11-28 10:06:42.637693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.931 [2024-11-28 10:06:42.637717] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:03.931 [2024-11-28 10:06:42.641387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.931 [2024-11-28 10:06:42.641415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:03.931 [2024-11-28 10:06:42.641427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.674 ms 00:34:03.931 [2024-11-28 10:06:42.641434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.931 [2024-11-28 10:06:42.641463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.931 [2024-11-28 10:06:42.641472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:03.931 [2024-11-28 10:06:42.641480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:03.931 [2024-11-28 10:06:42.641487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.931 [2024-11-28 10:06:42.641519] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:03.931 [2024-11-28 10:06:42.641539] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:03.931 [2024-11-28 10:06:42.641576] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:03.931 [2024-11-28 10:06:42.641594] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:03.931 [2024-11-28 10:06:42.641699] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:03.931 [2024-11-28 10:06:42.641717] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:03.931 [2024-11-28 10:06:42.641728] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:03.931 [2024-11-28 10:06:42.641738] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:03.931 [2024-11-28 10:06:42.641748] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:03.931 [2024-11-28 10:06:42.641756] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:03.931 [2024-11-28 10:06:42.641764] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:03.931 [2024-11-28 10:06:42.641774] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:03.931 [2024-11-28 10:06:42.641782] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:03.931 [2024-11-28 10:06:42.641790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.931 [2024-11-28 10:06:42.641797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:03.931 [2024-11-28 10:06:42.641805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:34:03.931 [2024-11-28 10:06:42.641813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.931 [2024-11-28 10:06:42.641896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.931 [2024-11-28 10:06:42.641910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:03.931 [2024-11-28 10:06:42.641918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:34:03.931 [2024-11-28 10:06:42.641926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.931 [2024-11-28 10:06:42.642046] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:03.931 [2024-11-28 10:06:42.642065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:03.931 [2024-11-28 10:06:42.642073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:03.931 [2024-11-28 10:06:42.642083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:03.931 [2024-11-28 10:06:42.642091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:03.931 [2024-11-28 10:06:42.642098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:03.931 [2024-11-28 10:06:42.642105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:03.931 [2024-11-28 10:06:42.642112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:34:03.931 [2024-11-28 10:06:42.642119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:03.931 [2024-11-28 10:06:42.642127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:03.931 [2024-11-28 10:06:42.642134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:03.931 [2024-11-28 10:06:42.642141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:03.931 [2024-11-28 10:06:42.642147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:03.931 [2024-11-28 10:06:42.642172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:03.931 [2024-11-28 10:06:42.642179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:03.931 [2024-11-28 10:06:42.642186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:03.931 [2024-11-28 10:06:42.642193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:03.931 [2024-11-28 10:06:42.642201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:03.931 [2024-11-28 10:06:42.642210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:03.931 [2024-11-28 10:06:42.642218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:03.931 [2024-11-28 10:06:42.642225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:03.931 [2024-11-28 10:06:42.642232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:03.931 [2024-11-28 10:06:42.642248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:03.931 [2024-11-28 10:06:42.642255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:03.931 [2024-11-28 10:06:42.642262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:03.931 [2024-11-28 10:06:42.642270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:03.931 [2024-11-28 10:06:42.642277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:03.931 [2024-11-28 10:06:42.642284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:03.931 [2024-11-28 10:06:42.642290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:03.931 [2024-11-28 10:06:42.642297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:03.931 [2024-11-28 10:06:42.642304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:03.931 [2024-11-28 10:06:42.642311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:03.931 [2024-11-28 10:06:42.642318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:03.931 [2024-11-28 10:06:42.642325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:03.931 [2024-11-28 10:06:42.642331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:03.931 [2024-11-28 10:06:42.642338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:03.931 [2024-11-28 10:06:42.642344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:03.931 [2024-11-28 10:06:42.642351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:03.931 [2024-11-28 10:06:42.642358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:03.931 [2024-11-28 10:06:42.642365] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:03.931 [2024-11-28 10:06:42.642372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:03.931 [2024-11-28 10:06:42.642379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:03.931 [2024-11-28 10:06:42.642385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:03.931 [2024-11-28 10:06:42.642392] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:03.931 [2024-11-28 10:06:42.642400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:03.931 [2024-11-28 10:06:42.642407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:03.931 [2024-11-28 10:06:42.642414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:03.931 [2024-11-28 10:06:42.642422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:03.931 [2024-11-28 10:06:42.642428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:03.931 [2024-11-28 10:06:42.642435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:03.931 [2024-11-28 10:06:42.642443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:03.931 [2024-11-28 10:06:42.642450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:03.932 [2024-11-28 10:06:42.642456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:03.932 [2024-11-28 10:06:42.642466] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:03.932 [2024-11-28 10:06:42.642475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:03.932 [2024-11-28 10:06:42.642486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:03.932 [2024-11-28 10:06:42.642494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:03.932 [2024-11-28 10:06:42.642503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:03.932 [2024-11-28 10:06:42.642510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:03.932 [2024-11-28 10:06:42.642517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:03.932 [2024-11-28 10:06:42.642524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:03.932 [2024-11-28 10:06:42.642532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:03.932 [2024-11-28 10:06:42.642539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:03.932 [2024-11-28 10:06:42.642546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:03.932 [2024-11-28 10:06:42.642553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:34:03.932 [2024-11-28 10:06:42.642561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:03.932 [2024-11-28 10:06:42.642568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:03.932 [2024-11-28 10:06:42.642575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:03.932 [2024-11-28 10:06:42.642582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:03.932 [2024-11-28 10:06:42.642588] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:03.932 [2024-11-28 10:06:42.642596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:03.932 [2024-11-28 10:06:42.642604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:03.932 [2024-11-28 10:06:42.642611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:03.932 [2024-11-28 10:06:42.642618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:03.932 [2024-11-28 10:06:42.642625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:03.932 [2024-11-28 10:06:42.642632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.932 [2024-11-28 10:06:42.642640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:03.932 [2024-11-28 10:06:42.642648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:34:03.932 [2024-11-28 10:06:42.642655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.932 [2024-11-28 10:06:42.672402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.932 [2024-11-28 10:06:42.672436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:03.932 [2024-11-28 10:06:42.672446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.705 ms 00:34:03.932 [2024-11-28 10:06:42.672457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.932 [2024-11-28 10:06:42.672541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.932 [2024-11-28 10:06:42.672549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:03.932 [2024-11-28 10:06:42.672557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:34:03.932 [2024-11-28 10:06:42.672565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.932 [2024-11-28 10:06:42.714693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.932 [2024-11-28 10:06:42.714736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:03.932 [2024-11-28 10:06:42.714748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.078 ms 00:34:03.932 [2024-11-28 10:06:42.714757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:34:03.932 [2024-11-28 10:06:42.714801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.932 [2024-11-28 10:06:42.714811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:03.932 [2024-11-28 10:06:42.714823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:03.932 [2024-11-28 10:06:42.714830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.932 [2024-11-28 10:06:42.715353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.932 [2024-11-28 10:06:42.715378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:03.932 [2024-11-28 10:06:42.715388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:34:03.932 [2024-11-28 10:06:42.715396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.932 [2024-11-28 10:06:42.715534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.932 [2024-11-28 10:06:42.715545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:03.932 [2024-11-28 10:06:42.715559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:34:03.932 [2024-11-28 10:06:42.715566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.932 [2024-11-28 10:06:42.730369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.932 [2024-11-28 10:06:42.730405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:03.932 [2024-11-28 10:06:42.730415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.784 ms 00:34:03.932 [2024-11-28 10:06:42.730422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.932 [2024-11-28 10:06:42.743597] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:34:03.932 [2024-11-28 10:06:42.743636] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:03.932 [2024-11-28 10:06:42.743654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.932 [2024-11-28 10:06:42.743662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:03.932 [2024-11-28 10:06:42.743672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.136 ms 00:34:03.932 [2024-11-28 10:06:42.743681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.932 [2024-11-28 10:06:42.768847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.932 [2024-11-28 10:06:42.768890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:03.932 [2024-11-28 10:06:42.768902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.122 ms 00:34:03.932 [2024-11-28 10:06:42.768911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.932 [2024-11-28 10:06:42.781319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.932 [2024-11-28 10:06:42.781353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:03.932 [2024-11-28 10:06:42.781363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.362 ms 00:34:03.932 [2024-11-28 10:06:42.781371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.932 [2024-11-28 10:06:42.792909] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.932 [2024-11-28 10:06:42.792943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:03.932 [2024-11-28 10:06:42.792953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.504 ms 00:34:03.932 [2024-11-28 10:06:42.792961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.932 [2024-11-28 10:06:42.793581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.932 [2024-11-28 10:06:42.793607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:03.932 [2024-11-28 10:06:42.793617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:34:03.932 [2024-11-28 10:06:42.793628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.194 [2024-11-28 10:06:42.857743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.194 [2024-11-28 10:06:42.857795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:04.194 [2024-11-28 10:06:42.857809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.096 ms 00:34:04.194 [2024-11-28 10:06:42.857823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.194 [2024-11-28 10:06:42.869307] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:04.194 [2024-11-28 10:06:42.872639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.194 [2024-11-28 10:06:42.872679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:04.194 [2024-11-28 10:06:42.872692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.765 ms 00:34:04.194 [2024-11-28 10:06:42.872701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.194 [2024-11-28 10:06:42.872786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.194 [2024-11-28 10:06:42.872798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:04.194 [2024-11-28 10:06:42.872809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:34:04.194 [2024-11-28 10:06:42.872817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.194 [2024-11-28 10:06:42.872899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.194 [2024-11-28 10:06:42.872912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:04.194 [2024-11-28 10:06:42.872922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:34:04.194 [2024-11-28 10:06:42.872931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.194 [2024-11-28 10:06:42.872954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.194 [2024-11-28 10:06:42.872963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:04.194 [2024-11-28 10:06:42.872973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:04.194 [2024-11-28 10:06:42.872981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.194 [2024-11-28 10:06:42.873021] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:04.194 [2024-11-28 10:06:42.873035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.194 [2024-11-28 10:06:42.873044] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:04.194 [2024-11-28 10:06:42.873054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:34:04.194 [2024-11-28 10:06:42.873063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.194 [2024-11-28 10:06:42.899238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.194 [2024-11-28 10:06:42.899289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:04.194 [2024-11-28 10:06:42.899304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.154 ms 00:34:04.194 [2024-11-28 10:06:42.899319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.194 [2024-11-28 10:06:42.899414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.194 [2024-11-28 10:06:42.899426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:04.194 [2024-11-28 10:06:42.899437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:34:04.194 [2024-11-28 10:06:42.899448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.194 [2024-11-28 10:06:42.900923] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 285.336 ms, result 0 00:34:05.136  [2024-11-28T10:06:44.961Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-28T10:06:46.346Z] Copying: 41/1024 [MB] (22 MBps) [2024-11-28T10:06:46.916Z] Copying: 66/1024 [MB] (25 MBps) [2024-11-28T10:06:48.304Z] Copying: 87/1024 [MB] (21 MBps) [2024-11-28T10:06:49.315Z] Copying: 111/1024 [MB] (23 MBps) [2024-11-28T10:06:49.914Z] Copying: 133/1024 [MB] (21 MBps) [2024-11-28T10:06:51.296Z] Copying: 161/1024 [MB] (28 MBps) [2024-11-28T10:06:52.240Z] Copying: 187/1024 [MB] (25 MBps) [2024-11-28T10:06:53.185Z] Copying: 211/1024 [MB] (24 MBps) [2024-11-28T10:06:54.135Z] Copying: 229/1024 [MB] (18 MBps) [2024-11-28T10:06:55.078Z] Copying: 252/1024 [MB] (22 MBps) [2024-11-28T10:06:56.020Z] Copying: 274/1024 [MB] (22 MBps) [2024-11-28T10:06:56.966Z] Copying: 288/1024 [MB] (13 MBps) [2024-11-28T10:06:58.354Z] Copying: 306/1024 [MB] (17 MBps) [2024-11-28T10:06:58.928Z] Copying: 323/1024 [MB] (16 MBps) [2024-11-28T10:07:00.314Z] Copying: 343/1024 [MB] (20 MBps) [2024-11-28T10:07:01.260Z] Copying: 355/1024 [MB] (11 MBps) [2024-11-28T10:07:02.206Z] Copying: 375/1024 [MB] (20 MBps) [2024-11-28T10:07:03.151Z] Copying: 387/1024 [MB] (11 MBps) [2024-11-28T10:07:04.097Z] Copying: 399/1024 [MB] (11 MBps) [2024-11-28T10:07:05.041Z] Copying: 409/1024 [MB] (10 MBps) [2024-11-28T10:07:05.986Z] Copying: 420/1024 [MB] (11 MBps) [2024-11-28T10:07:06.932Z] Copying: 432/1024 [MB] (11 MBps) [2024-11-28T10:07:08.320Z] Copying: 442/1024 [MB] (10 MBps) [2024-11-28T10:07:09.265Z] Copying: 453/1024 [MB] (11 MBps) [2024-11-28T10:07:10.207Z] Copying: 465/1024 [MB] (11 MBps) [2024-11-28T10:07:11.168Z] Copying: 476/1024 [MB] (11 MBps) [2024-11-28T10:07:12.111Z] Copying: 487/1024 [MB] (11 MBps) [2024-11-28T10:07:13.052Z] Copying: 497/1024 [MB] (10 MBps) [2024-11-28T10:07:13.994Z] Copying: 508/1024 [MB] (10 MBps) [2024-11-28T10:07:14.938Z] Copying: 520/1024 [MB] (12 MBps) [2024-11-28T10:07:16.325Z] Copying: 532/1024 [MB] (11 MBps) [2024-11-28T10:07:17.271Z] Copying: 543/1024 [MB] (11 MBps) [2024-11-28T10:07:18.214Z] Copying: 554/1024 [MB] (11 MBps) [2024-11-28T10:07:19.158Z] Copying: 565/1024 [MB] (11 MBps) [2024-11-28T10:07:20.103Z] Copying: 577/1024 [MB] (11 MBps) 
[2024-11-28T10:07:21.049Z] Copying: 588/1024 [MB] (11 MBps) [2024-11-28T10:07:22.074Z] Copying: 598/1024 [MB] (10 MBps) [2024-11-28T10:07:23.053Z] Copying: 608/1024 [MB] (10 MBps) [2024-11-28T10:07:23.997Z] Copying: 619/1024 [MB] (10 MBps) [2024-11-28T10:07:24.943Z] Copying: 630/1024 [MB] (11 MBps) [2024-11-28T10:07:26.339Z] Copying: 641/1024 [MB] (11 MBps) [2024-11-28T10:07:27.280Z] Copying: 652/1024 [MB] (11 MBps) [2024-11-28T10:07:28.223Z] Copying: 664/1024 [MB] (11 MBps) [2024-11-28T10:07:29.166Z] Copying: 675/1024 [MB] (11 MBps) [2024-11-28T10:07:30.110Z] Copying: 687/1024 [MB] (11 MBps) [2024-11-28T10:07:31.054Z] Copying: 698/1024 [MB] (11 MBps) [2024-11-28T10:07:31.999Z] Copying: 708/1024 [MB] (10 MBps) [2024-11-28T10:07:32.944Z] Copying: 720/1024 [MB] (11 MBps) [2024-11-28T10:07:34.330Z] Copying: 731/1024 [MB] (11 MBps) [2024-11-28T10:07:35.274Z] Copying: 742/1024 [MB] (11 MBps) [2024-11-28T10:07:36.217Z] Copying: 753/1024 [MB] (11 MBps) [2024-11-28T10:07:37.162Z] Copying: 764/1024 [MB] (11 MBps) [2024-11-28T10:07:38.108Z] Copying: 776/1024 [MB] (11 MBps) [2024-11-28T10:07:39.052Z] Copying: 787/1024 [MB] (11 MBps) [2024-11-28T10:07:39.998Z] Copying: 798/1024 [MB] (11 MBps) [2024-11-28T10:07:40.944Z] Copying: 809/1024 [MB] (11 MBps) [2024-11-28T10:07:42.334Z] Copying: 821/1024 [MB] (11 MBps) [2024-11-28T10:07:43.279Z] Copying: 831/1024 [MB] (10 MBps) [2024-11-28T10:07:44.226Z] Copying: 843/1024 [MB] (11 MBps) [2024-11-28T10:07:45.172Z] Copying: 854/1024 [MB] (11 MBps) [2024-11-28T10:07:46.114Z] Copying: 866/1024 [MB] (11 MBps) [2024-11-28T10:07:47.055Z] Copying: 877/1024 [MB] (11 MBps) [2024-11-28T10:07:47.999Z] Copying: 889/1024 [MB] (11 MBps) [2024-11-28T10:07:48.942Z] Copying: 900/1024 [MB] (11 MBps) [2024-11-28T10:07:50.332Z] Copying: 912/1024 [MB] (11 MBps) [2024-11-28T10:07:51.279Z] Copying: 923/1024 [MB] (11 MBps) [2024-11-28T10:07:52.225Z] Copying: 935/1024 [MB] (11 MBps) [2024-11-28T10:07:53.174Z] Copying: 946/1024 [MB] (11 MBps) [2024-11-28T10:07:54.193Z] Copying: 958/1024 [MB] (11 MBps) [2024-11-28T10:07:55.139Z] Copying: 969/1024 [MB] (11 MBps) [2024-11-28T10:07:56.081Z] Copying: 980/1024 [MB] (11 MBps) [2024-11-28T10:07:57.028Z] Copying: 991/1024 [MB] (10 MBps) [2024-11-28T10:07:57.974Z] Copying: 1003/1024 [MB] (11 MBps) [2024-11-28T10:07:58.921Z] Copying: 1014/1024 [MB] (11 MBps) [2024-11-28T10:07:58.921Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-28 10:07:58.741042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.041 [2024-11-28 10:07:58.741089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:20.041 [2024-11-28 10:07:58.741101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:20.041 [2024-11-28 10:07:58.741109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.041 [2024-11-28 10:07:58.741126] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:20.041 [2024-11-28 10:07:58.743395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.041 [2024-11-28 10:07:58.743421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:20.041 [2024-11-28 10:07:58.743434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.256 ms 00:35:20.041 [2024-11-28 10:07:58.743441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.041 [2024-11-28 10:07:58.745752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:35:20.041 [2024-11-28 10:07:58.745778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:20.041 [2024-11-28 10:07:58.745786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.294 ms 00:35:20.041 [2024-11-28 10:07:58.745792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.041 [2024-11-28 10:07:58.745813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.041 [2024-11-28 10:07:58.745821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:20.041 [2024-11-28 10:07:58.745828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:20.041 [2024-11-28 10:07:58.745834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.041 [2024-11-28 10:07:58.745877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.041 [2024-11-28 10:07:58.745885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:20.041 [2024-11-28 10:07:58.745892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:35:20.041 [2024-11-28 10:07:58.745898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.041 [2024-11-28 10:07:58.745909] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:20.041 [2024-11-28 10:07:58.745919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:35:20.041 [2024-11-28 10:07:58.745927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:20.041 [2024-11-28 10:07:58.745934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:20.041 [2024-11-28 10:07:58.745939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:20.041 [2024-11-28 10:07:58.745945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:20.041 [2024-11-28 10:07:58.745953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:20.041 [2024-11-28 10:07:58.745959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:20.041 [2024-11-28 10:07:58.745964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:20.041 [2024-11-28 10:07:58.745970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:20.041 [2024-11-28 10:07:58.745975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:20.041 [2024-11-28 10:07:58.745981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:20.041 [2024-11-28 10:07:58.745987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:20.041 [2024-11-28 10:07:58.745993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:20.041 [2024-11-28 10:07:58.745998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746009] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 
[2024-11-28 10:07:58.746175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 
state: free 00:35:20.042 [2024-11-28 10:07:58.746324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 
0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:20.042 [2024-11-28 10:07:58.746540] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:20.042 [2024-11-28 10:07:58.746546] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 98d603d3-a2ee-4c3b-94b8-6abc305400e3 00:35:20.042 [2024-11-28 10:07:58.746552] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:35:20.043 [2024-11-28 10:07:58.746558] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:35:20.043 [2024-11-28 10:07:58.746563] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:20.043 [2024-11-28 10:07:58.746575] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:20.043 [2024-11-28 10:07:58.746580] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:20.043 [2024-11-28 10:07:58.746586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:20.043 [2024-11-28 10:07:58.746592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:20.043 [2024-11-28 10:07:58.746597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:20.043 [2024-11-28 10:07:58.746602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:20.043 [2024-11-28 10:07:58.746608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.043 [2024-11-28 10:07:58.746614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:20.043 [2024-11-28 10:07:58.746620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:35:20.043 [2024-11-28 10:07:58.746626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.043 [2024-11-28 10:07:58.756629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.043 [2024-11-28 10:07:58.756657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:20.043 [2024-11-28 10:07:58.756665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.993 ms 00:35:20.043 [2024-11-28 10:07:58.756671] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.043 [2024-11-28 10:07:58.756966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.043 [2024-11-28 10:07:58.756978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:20.043 [2024-11-28 10:07:58.756985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:35:20.043 [2024-11-28 10:07:58.756991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.043 [2024-11-28 10:07:58.784401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:20.043 [2024-11-28 10:07:58.784427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:20.043 [2024-11-28 10:07:58.784435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:20.043 [2024-11-28 10:07:58.784442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.043 [2024-11-28 10:07:58.784488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:20.043 [2024-11-28 10:07:58.784495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:20.043 [2024-11-28 10:07:58.784501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:20.043 [2024-11-28 10:07:58.784507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.043 [2024-11-28 10:07:58.784541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:20.043 [2024-11-28 10:07:58.784551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:20.043 [2024-11-28 10:07:58.784558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:20.043 [2024-11-28 10:07:58.784565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.043 [2024-11-28 10:07:58.784577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:20.043 [2024-11-28 10:07:58.784583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:20.043 [2024-11-28 10:07:58.784592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:20.043 [2024-11-28 10:07:58.784598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.043 [2024-11-28 10:07:58.848046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:20.043 [2024-11-28 10:07:58.848082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:20.043 [2024-11-28 10:07:58.848091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:20.043 [2024-11-28 10:07:58.848097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.043 [2024-11-28 10:07:58.899806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:20.043 [2024-11-28 10:07:58.899843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:20.043 [2024-11-28 10:07:58.899852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:20.043 [2024-11-28 10:07:58.899859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.043 [2024-11-28 10:07:58.899925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:20.043 [2024-11-28 10:07:58.899933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:20.043 [2024-11-28 10:07:58.899943] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:20.043 [2024-11-28 10:07:58.899949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.043 [2024-11-28 10:07:58.899978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:20.043 [2024-11-28 10:07:58.899985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:20.043 [2024-11-28 10:07:58.899992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:20.043 [2024-11-28 10:07:58.899999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.043 [2024-11-28 10:07:58.900059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:20.043 [2024-11-28 10:07:58.900068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:20.043 [2024-11-28 10:07:58.900081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:20.043 [2024-11-28 10:07:58.900089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.043 [2024-11-28 10:07:58.900108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:20.043 [2024-11-28 10:07:58.900115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:20.043 [2024-11-28 10:07:58.900121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:20.043 [2024-11-28 10:07:58.900127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.043 [2024-11-28 10:07:58.900170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:20.043 [2024-11-28 10:07:58.900178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:20.043 [2024-11-28 10:07:58.900184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:20.043 [2024-11-28 10:07:58.900193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.043 [2024-11-28 10:07:58.900230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:20.043 [2024-11-28 10:07:58.900238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:20.043 [2024-11-28 10:07:58.900245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:20.043 [2024-11-28 10:07:58.900251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.043 [2024-11-28 10:07:58.900359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 159.284 ms, result 0 00:35:20.987 00:35:20.987 00:35:20.987 10:07:59 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:35:20.987 [2024-11-28 10:07:59.694337] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:35:20.987 [2024-11-28 10:07:59.694614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86652 ] 00:35:20.987 [2024-11-28 10:07:59.849401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.248 [2024-11-28 10:07:59.950280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.511 [2024-11-28 10:08:00.183990] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:21.511 [2024-11-28 10:08:00.184049] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:21.511 [2024-11-28 10:08:00.340077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.511 [2024-11-28 10:08:00.340117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:21.511 [2024-11-28 10:08:00.340129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:21.511 [2024-11-28 10:08:00.340135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.511 [2024-11-28 10:08:00.340184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.511 [2024-11-28 10:08:00.340195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:21.511 [2024-11-28 10:08:00.340202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:35:21.511 [2024-11-28 10:08:00.340208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.511 [2024-11-28 10:08:00.340222] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:21.511 [2024-11-28 10:08:00.340766] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:21.511 [2024-11-28 10:08:00.340784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.511 [2024-11-28 10:08:00.340791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:21.511 [2024-11-28 10:08:00.340798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:35:21.511 [2024-11-28 10:08:00.340804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.511 [2024-11-28 10:08:00.341012] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:35:21.511 [2024-11-28 10:08:00.341031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.511 [2024-11-28 10:08:00.341042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:21.511 [2024-11-28 10:08:00.341049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:35:21.511 [2024-11-28 10:08:00.341054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.511 [2024-11-28 10:08:00.341093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.511 [2024-11-28 10:08:00.341101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:21.511 [2024-11-28 10:08:00.341107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:35:21.511 [2024-11-28 10:08:00.341113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.511 [2024-11-28 10:08:00.341338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:35:21.511 [2024-11-28 10:08:00.341353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:21.511 [2024-11-28 10:08:00.341360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:35:21.512 [2024-11-28 10:08:00.341366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.512 [2024-11-28 10:08:00.341446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.512 [2024-11-28 10:08:00.341455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:21.512 [2024-11-28 10:08:00.341462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:35:21.512 [2024-11-28 10:08:00.341467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.512 [2024-11-28 10:08:00.341486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.512 [2024-11-28 10:08:00.341493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:21.512 [2024-11-28 10:08:00.341502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:21.512 [2024-11-28 10:08:00.341507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.512 [2024-11-28 10:08:00.341521] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:21.512 [2024-11-28 10:08:00.344747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.512 [2024-11-28 10:08:00.344771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:21.512 [2024-11-28 10:08:00.344779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms 00:35:21.512 [2024-11-28 10:08:00.344785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.512 [2024-11-28 10:08:00.344812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.512 [2024-11-28 10:08:00.344819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:21.512 [2024-11-28 10:08:00.344825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:21.512 [2024-11-28 10:08:00.344831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.512 [2024-11-28 10:08:00.344864] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:21.512 [2024-11-28 10:08:00.344883] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:21.512 [2024-11-28 10:08:00.344913] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:21.512 [2024-11-28 10:08:00.344926] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:21.512 [2024-11-28 10:08:00.345012] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:21.512 [2024-11-28 10:08:00.345021] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:21.512 [2024-11-28 10:08:00.345029] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:21.512 [2024-11-28 10:08:00.345037] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:21.512 [2024-11-28 10:08:00.345044] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:21.512 [2024-11-28 10:08:00.345052] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:21.512 [2024-11-28 10:08:00.345058] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:21.512 [2024-11-28 10:08:00.345063] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:21.512 [2024-11-28 10:08:00.345068] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:21.512 [2024-11-28 10:08:00.345074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.512 [2024-11-28 10:08:00.345080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:21.512 [2024-11-28 10:08:00.345085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:35:21.512 [2024-11-28 10:08:00.345090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.512 [2024-11-28 10:08:00.345165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.512 [2024-11-28 10:08:00.345173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:21.512 [2024-11-28 10:08:00.345179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:35:21.512 [2024-11-28 10:08:00.345187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.512 [2024-11-28 10:08:00.345263] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:21.512 [2024-11-28 10:08:00.345271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:21.512 [2024-11-28 10:08:00.345278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:21.512 [2024-11-28 10:08:00.345284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:21.512 [2024-11-28 10:08:00.345290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:21.512 [2024-11-28 10:08:00.345296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:21.512 [2024-11-28 10:08:00.345302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:21.512 [2024-11-28 10:08:00.345308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:21.512 [2024-11-28 10:08:00.345316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:21.512 [2024-11-28 10:08:00.345321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:21.512 [2024-11-28 10:08:00.345327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:21.512 [2024-11-28 10:08:00.345332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:21.512 [2024-11-28 10:08:00.345337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:21.512 [2024-11-28 10:08:00.345343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:21.512 [2024-11-28 10:08:00.345348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:21.512 [2024-11-28 10:08:00.345359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:21.512 [2024-11-28 10:08:00.345364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:21.512 [2024-11-28 10:08:00.345369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:21.512 [2024-11-28 10:08:00.345374] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:21.512 [2024-11-28 10:08:00.345380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:21.512 [2024-11-28 10:08:00.345386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:21.512 [2024-11-28 10:08:00.345391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:21.512 [2024-11-28 10:08:00.345396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:21.512 [2024-11-28 10:08:00.345401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:21.512 [2024-11-28 10:08:00.345407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:21.512 [2024-11-28 10:08:00.345413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:21.512 [2024-11-28 10:08:00.345418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:21.512 [2024-11-28 10:08:00.345423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:21.512 [2024-11-28 10:08:00.345429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:21.512 [2024-11-28 10:08:00.345434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:21.512 [2024-11-28 10:08:00.345439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:21.512 [2024-11-28 10:08:00.345444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:21.512 [2024-11-28 10:08:00.345449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:21.512 [2024-11-28 10:08:00.345454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:21.512 [2024-11-28 10:08:00.345459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:21.512 [2024-11-28 10:08:00.345464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:21.512 [2024-11-28 10:08:00.345469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:21.512 [2024-11-28 10:08:00.345474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:21.512 [2024-11-28 10:08:00.345480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:21.512 [2024-11-28 10:08:00.345485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:21.512 [2024-11-28 10:08:00.345491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:21.512 [2024-11-28 10:08:00.345497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:21.512 [2024-11-28 10:08:00.345502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:21.512 [2024-11-28 10:08:00.345507] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:21.512 [2024-11-28 10:08:00.345513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:21.512 [2024-11-28 10:08:00.345519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:21.512 [2024-11-28 10:08:00.345524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:21.512 [2024-11-28 10:08:00.345532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:21.512 [2024-11-28 10:08:00.345537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:21.512 [2024-11-28 10:08:00.345542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:21.512 
[2024-11-28 10:08:00.345548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:21.512 [2024-11-28 10:08:00.345553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:21.512 [2024-11-28 10:08:00.345559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:21.512 [2024-11-28 10:08:00.345566] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:21.512 [2024-11-28 10:08:00.345573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:21.512 [2024-11-28 10:08:00.345580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:21.512 [2024-11-28 10:08:00.345586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:21.512 [2024-11-28 10:08:00.345592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:21.512 [2024-11-28 10:08:00.345597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:21.512 [2024-11-28 10:08:00.345604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:21.512 [2024-11-28 10:08:00.345609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:21.512 [2024-11-28 10:08:00.345616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:21.512 [2024-11-28 10:08:00.345621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:21.513 [2024-11-28 10:08:00.345627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:21.513 [2024-11-28 10:08:00.345633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:21.513 [2024-11-28 10:08:00.345639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:21.513 [2024-11-28 10:08:00.345644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:21.513 [2024-11-28 10:08:00.345650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:21.513 [2024-11-28 10:08:00.345656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:21.513 [2024-11-28 10:08:00.345663] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:21.513 [2024-11-28 10:08:00.345669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:21.513 [2024-11-28 10:08:00.345675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:35:21.513 [2024-11-28 10:08:00.345682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:21.513 [2024-11-28 10:08:00.345688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:21.513 [2024-11-28 10:08:00.345694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:21.513 [2024-11-28 10:08:00.345700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.513 [2024-11-28 10:08:00.345706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:21.513 [2024-11-28 10:08:00.345712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:35:21.513 [2024-11-28 10:08:00.345718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.513 [2024-11-28 10:08:00.366599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.513 [2024-11-28 10:08:00.366625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:21.513 [2024-11-28 10:08:00.366633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.847 ms 00:35:21.513 [2024-11-28 10:08:00.366639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.513 [2024-11-28 10:08:00.366701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.513 [2024-11-28 10:08:00.366707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:21.513 [2024-11-28 10:08:00.366716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:35:21.513 [2024-11-28 10:08:00.366721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.407477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.407509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:21.773 [2024-11-28 10:08:00.407519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.718 ms 00:35:21.773 [2024-11-28 10:08:00.407525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.407559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.407567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:21.773 [2024-11-28 10:08:00.407574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:35:21.773 [2024-11-28 10:08:00.407580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.407657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.407666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:21.773 [2024-11-28 10:08:00.407673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:35:21.773 [2024-11-28 10:08:00.407679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.407777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.407787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:21.773 [2024-11-28 10:08:00.407794] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:35:21.773 [2024-11-28 10:08:00.407800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.419579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.419603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:21.773 [2024-11-28 10:08:00.419611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.765 ms 00:35:21.773 [2024-11-28 10:08:00.419617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.419711] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:35:21.773 [2024-11-28 10:08:00.419722] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:21.773 [2024-11-28 10:08:00.419729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.419739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:21.773 [2024-11-28 10:08:00.419745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:35:21.773 [2024-11-28 10:08:00.419751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.429023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.429045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:21.773 [2024-11-28 10:08:00.429053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.260 ms 00:35:21.773 [2024-11-28 10:08:00.429060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.429162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.429170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:21.773 [2024-11-28 10:08:00.429177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:35:21.773 [2024-11-28 10:08:00.429186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.429209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.429217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:21.773 [2024-11-28 10:08:00.429230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:35:21.773 [2024-11-28 10:08:00.429237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.429677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.429691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:21.773 [2024-11-28 10:08:00.429698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:35:21.773 [2024-11-28 10:08:00.429704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.429719] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:35:21.773 [2024-11-28 10:08:00.429727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.429734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:35:21.773 [2024-11-28 10:08:00.429740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:21.773 [2024-11-28 10:08:00.429746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.439212] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:21.773 [2024-11-28 10:08:00.439319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.439327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:21.773 [2024-11-28 10:08:00.439334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.550 ms 00:35:21.773 [2024-11-28 10:08:00.439341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.440933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.440955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:21.773 [2024-11-28 10:08:00.440962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:35:21.773 [2024-11-28 10:08:00.440968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.441037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.441045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:21.773 [2024-11-28 10:08:00.441052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:35:21.773 [2024-11-28 10:08:00.441058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.441086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.441096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:21.773 [2024-11-28 10:08:00.441102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:21.773 [2024-11-28 10:08:00.441108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.441132] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:21.773 [2024-11-28 10:08:00.441141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.441148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:21.773 [2024-11-28 10:08:00.441166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:21.773 [2024-11-28 10:08:00.441172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.460540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.460566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:21.773 [2024-11-28 10:08:00.460575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.352 ms 00:35:21.773 [2024-11-28 10:08:00.460581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.460641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.773 [2024-11-28 10:08:00.460649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:21.773 [2024-11-28 10:08:00.460656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.032 ms 00:35:21.773 [2024-11-28 10:08:00.460662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.773 [2024-11-28 10:08:00.461523] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 121.083 ms, result 0 00:35:23.160  [2024-11-28T10:08:02.613Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-28T10:08:03.999Z] Copying: 23/1024 [MB] (11 MBps) [2024-11-28T10:08:04.942Z] Copying: 34/1024 [MB] (11 MBps) [2024-11-28T10:08:05.892Z] Copying: 46/1024 [MB] (11 MBps) [2024-11-28T10:08:06.837Z] Copying: 57/1024 [MB] (11 MBps) [2024-11-28T10:08:07.782Z] Copying: 69/1024 [MB] (11 MBps) [2024-11-28T10:08:08.726Z] Copying: 81/1024 [MB] (11 MBps) [2024-11-28T10:08:09.672Z] Copying: 93/1024 [MB] (12 MBps) [2024-11-28T10:08:10.617Z] Copying: 105/1024 [MB] (12 MBps) [2024-11-28T10:08:12.007Z] Copying: 117/1024 [MB] (12 MBps) [2024-11-28T10:08:12.953Z] Copying: 129/1024 [MB] (11 MBps) [2024-11-28T10:08:13.900Z] Copying: 140/1024 [MB] (11 MBps) [2024-11-28T10:08:14.845Z] Copying: 151/1024 [MB] (10 MBps) [2024-11-28T10:08:15.790Z] Copying: 163/1024 [MB] (12 MBps) [2024-11-28T10:08:16.736Z] Copying: 174/1024 [MB] (10 MBps) [2024-11-28T10:08:17.682Z] Copying: 186/1024 [MB] (11 MBps) [2024-11-28T10:08:18.628Z] Copying: 198/1024 [MB] (11 MBps) [2024-11-28T10:08:20.013Z] Copying: 210/1024 [MB] (11 MBps) [2024-11-28T10:08:20.958Z] Copying: 222/1024 [MB] (11 MBps) [2024-11-28T10:08:21.903Z] Copying: 234/1024 [MB] (11 MBps) [2024-11-28T10:08:22.847Z] Copying: 246/1024 [MB] (11 MBps) [2024-11-28T10:08:23.792Z] Copying: 258/1024 [MB] (12 MBps) [2024-11-28T10:08:24.763Z] Copying: 268/1024 [MB] (10 MBps) [2024-11-28T10:08:25.745Z] Copying: 280/1024 [MB] (11 MBps) [2024-11-28T10:08:26.691Z] Copying: 291/1024 [MB] (10 MBps) [2024-11-28T10:08:27.635Z] Copying: 304/1024 [MB] (13 MBps) [2024-11-28T10:08:29.038Z] Copying: 316/1024 [MB] (11 MBps) [2024-11-28T10:08:29.608Z] Copying: 328/1024 [MB] (11 MBps) [2024-11-28T10:08:30.993Z] Copying: 339/1024 [MB] (11 MBps) [2024-11-28T10:08:31.938Z] Copying: 350/1024 [MB] (11 MBps) [2024-11-28T10:08:32.883Z] Copying: 361/1024 [MB] (10 MBps) [2024-11-28T10:08:33.829Z] Copying: 373/1024 [MB] (11 MBps) [2024-11-28T10:08:34.773Z] Copying: 384/1024 [MB] (10 MBps) [2024-11-28T10:08:35.716Z] Copying: 396/1024 [MB] (11 MBps) [2024-11-28T10:08:36.663Z] Copying: 407/1024 [MB] (11 MBps) [2024-11-28T10:08:37.606Z] Copying: 419/1024 [MB] (12 MBps) [2024-11-28T10:08:38.992Z] Copying: 431/1024 [MB] (11 MBps) [2024-11-28T10:08:39.952Z] Copying: 443/1024 [MB] (12 MBps) [2024-11-28T10:08:40.915Z] Copying: 454/1024 [MB] (11 MBps) [2024-11-28T10:08:41.859Z] Copying: 466/1024 [MB] (11 MBps) [2024-11-28T10:08:42.802Z] Copying: 478/1024 [MB] (11 MBps) [2024-11-28T10:08:43.747Z] Copying: 489/1024 [MB] (11 MBps) [2024-11-28T10:08:44.690Z] Copying: 500/1024 [MB] (11 MBps) [2024-11-28T10:08:45.634Z] Copying: 512/1024 [MB] (11 MBps) [2024-11-28T10:08:47.022Z] Copying: 524/1024 [MB] (11 MBps) [2024-11-28T10:08:47.966Z] Copying: 535/1024 [MB] (11 MBps) [2024-11-28T10:08:48.911Z] Copying: 547/1024 [MB] (11 MBps) [2024-11-28T10:08:49.856Z] Copying: 559/1024 [MB] (11 MBps) [2024-11-28T10:08:50.800Z] Copying: 570/1024 [MB] (10 MBps) [2024-11-28T10:08:51.742Z] Copying: 581/1024 [MB] (11 MBps) [2024-11-28T10:08:52.686Z] Copying: 592/1024 [MB] (11 MBps) [2024-11-28T10:08:53.630Z] Copying: 604/1024 [MB] (11 MBps) [2024-11-28T10:08:55.018Z] Copying: 616/1024 [MB] (11 MBps) [2024-11-28T10:08:55.972Z] Copying: 627/1024 [MB] 
(11 MBps) [2024-11-28T10:08:56.964Z] Copying: 639/1024 [MB] (11 MBps) [2024-11-28T10:08:57.938Z] Copying: 651/1024 [MB] (11 MBps) [2024-11-28T10:08:58.883Z] Copying: 662/1024 [MB] (11 MBps) [2024-11-28T10:08:59.827Z] Copying: 673/1024 [MB] (10 MBps) [2024-11-28T10:09:00.772Z] Copying: 685/1024 [MB] (11 MBps) [2024-11-28T10:09:01.716Z] Copying: 697/1024 [MB] (11 MBps) [2024-11-28T10:09:02.662Z] Copying: 708/1024 [MB] (11 MBps) [2024-11-28T10:09:03.605Z] Copying: 721/1024 [MB] (12 MBps) [2024-11-28T10:09:04.994Z] Copying: 732/1024 [MB] (11 MBps) [2024-11-28T10:09:05.937Z] Copying: 744/1024 [MB] (11 MBps) [2024-11-28T10:09:06.879Z] Copying: 755/1024 [MB] (10 MBps) [2024-11-28T10:09:07.823Z] Copying: 768/1024 [MB] (12 MBps) [2024-11-28T10:09:08.769Z] Copying: 779/1024 [MB] (11 MBps) [2024-11-28T10:09:09.715Z] Copying: 789/1024 [MB] (10 MBps) [2024-11-28T10:09:10.660Z] Copying: 800/1024 [MB] (10 MBps) [2024-11-28T10:09:11.605Z] Copying: 811/1024 [MB] (11 MBps) [2024-11-28T10:09:12.990Z] Copying: 822/1024 [MB] (11 MBps) [2024-11-28T10:09:13.934Z] Copying: 834/1024 [MB] (11 MBps) [2024-11-28T10:09:14.881Z] Copying: 844/1024 [MB] (10 MBps) [2024-11-28T10:09:15.822Z] Copying: 855/1024 [MB] (10 MBps) [2024-11-28T10:09:16.767Z] Copying: 867/1024 [MB] (11 MBps) [2024-11-28T10:09:17.712Z] Copying: 878/1024 [MB] (11 MBps) [2024-11-28T10:09:18.657Z] Copying: 889/1024 [MB] (11 MBps) [2024-11-28T10:09:20.046Z] Copying: 901/1024 [MB] (11 MBps) [2024-11-28T10:09:20.618Z] Copying: 913/1024 [MB] (11 MBps) [2024-11-28T10:09:22.007Z] Copying: 924/1024 [MB] (11 MBps) [2024-11-28T10:09:22.951Z] Copying: 936/1024 [MB] (11 MBps) [2024-11-28T10:09:23.896Z] Copying: 955/1024 [MB] (18 MBps) [2024-11-28T10:09:24.840Z] Copying: 966/1024 [MB] (11 MBps) [2024-11-28T10:09:25.782Z] Copying: 977/1024 [MB] (10 MBps) [2024-11-28T10:09:26.725Z] Copying: 988/1024 [MB] (11 MBps) [2024-11-28T10:09:27.669Z] Copying: 999/1024 [MB] (11 MBps) [2024-11-28T10:09:28.634Z] Copying: 1011/1024 [MB] (11 MBps) [2024-11-28T10:09:28.931Z] Copying: 1022/1024 [MB] (11 MBps) [2024-11-28T10:09:29.202Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-28 10:09:28.942586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.322 [2024-11-28 10:09:28.942647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:50.322 [2024-11-28 10:09:28.942661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:50.322 [2024-11-28 10:09:28.942669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.322 [2024-11-28 10:09:28.942692] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:50.322 [2024-11-28 10:09:28.945088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.322 [2024-11-28 10:09:28.945120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:50.322 [2024-11-28 10:09:28.945129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.383 ms 00:36:50.322 [2024-11-28 10:09:28.945136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.322 [2024-11-28 10:09:28.945336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.322 [2024-11-28 10:09:28.945346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:50.322 [2024-11-28 10:09:28.945354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:36:50.322 [2024-11-28 10:09:28.945360] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.322 [2024-11-28 10:09:28.945388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.322 [2024-11-28 10:09:28.945396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:36:50.322 [2024-11-28 10:09:28.945403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:50.322 [2024-11-28 10:09:28.945409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.322 [2024-11-28 10:09:28.945455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.322 [2024-11-28 10:09:28.945464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:36:50.322 [2024-11-28 10:09:28.945470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:36:50.322 [2024-11-28 10:09:28.945477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.322 [2024-11-28 10:09:28.945488] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:50.322 [2024-11-28 10:09:28.945499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 
10:09:28.945603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 
00:36:50.322 [2024-11-28 10:09:28.945751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:50.322 [2024-11-28 10:09:28.945883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 
wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.945998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:50.323 [2024-11-28 10:09:28.946120] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:50.323 [2024-11-28 10:09:28.946126] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 98d603d3-a2ee-4c3b-94b8-6abc305400e3 00:36:50.323 [2024-11-28 10:09:28.946132] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:36:50.323 [2024-11-28 10:09:28.946138] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:36:50.323 [2024-11-28 10:09:28.946145] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:36:50.323 [2024-11-28 10:09:28.946581] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:36:50.323 [2024-11-28 10:09:28.946593] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:50.323 [2024-11-28 10:09:28.946600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:50.323 [2024-11-28 10:09:28.946607] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:50.323 [2024-11-28 10:09:28.946612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:50.323 [2024-11-28 10:09:28.946618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:50.323 [2024-11-28 10:09:28.946624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.323 [2024-11-28 10:09:28.946630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:50.323 [2024-11-28 10:09:28.946637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.137 ms 00:36:50.323 [2024-11-28 10:09:28.946646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.323 [2024-11-28 10:09:28.959051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.323 [2024-11-28 10:09:28.959084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:50.323 [2024-11-28 10:09:28.959095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.389 ms 00:36:50.323 [2024-11-28 10:09:28.959104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.323 [2024-11-28 10:09:28.959480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:50.323 [2024-11-28 10:09:28.959496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:36:50.323 [2024-11-28 10:09:28.959508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:36:50.323 [2024-11-28 10:09:28.959516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.323 [2024-11-28 10:09:28.987874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:50.323 [2024-11-28 10:09:28.987903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:50.323 [2024-11-28 10:09:28.987912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:50.323 [2024-11-28 10:09:28.987918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.323 [2024-11-28 10:09:28.987966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:50.323 [2024-11-28 10:09:28.987974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:50.323 [2024-11-28 10:09:28.987983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:50.323 [2024-11-28 10:09:28.987990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.323 [2024-11-28 10:09:28.988030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:50.323 [2024-11-28 10:09:28.988038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:50.323 [2024-11-28 10:09:28.988045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:50.323 [2024-11-28 10:09:28.988051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.323 [2024-11-28 10:09:28.988065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:50.323 [2024-11-28 10:09:28.988071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:50.323 [2024-11-28 10:09:28.988077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:50.323 [2024-11-28 10:09:28.988085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.323 [2024-11-28 10:09:29.051862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:50.323 [2024-11-28 10:09:29.051894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:50.323 [2024-11-28 10:09:29.051903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:50.323 [2024-11-28 10:09:29.051910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.323 [2024-11-28 10:09:29.103570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:50.323 [2024-11-28 10:09:29.103606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:50.323 [2024-11-28 10:09:29.103615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:50.323 [2024-11-28 10:09:29.103626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.323 [2024-11-28 10:09:29.103693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:50.323 [2024-11-28 10:09:29.103701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:50.323 [2024-11-28 10:09:29.103708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:50.323 [2024-11-28 10:09:29.103716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.323 [2024-11-28 10:09:29.103748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:50.323 
[2024-11-28 10:09:29.103756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:50.323 [2024-11-28 10:09:29.103763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:50.323 [2024-11-28 10:09:29.103769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.323 [2024-11-28 10:09:29.103835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:50.323 [2024-11-28 10:09:29.103844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:50.323 [2024-11-28 10:09:29.103850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:50.323 [2024-11-28 10:09:29.103857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.323 [2024-11-28 10:09:29.103877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:50.323 [2024-11-28 10:09:29.103884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:50.323 [2024-11-28 10:09:29.103890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:50.323 [2024-11-28 10:09:29.103897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.323 [2024-11-28 10:09:29.103932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:50.323 [2024-11-28 10:09:29.103939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:50.323 [2024-11-28 10:09:29.103947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:50.323 [2024-11-28 10:09:29.103953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.324 [2024-11-28 10:09:29.103989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:50.324 [2024-11-28 10:09:29.103998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:50.324 [2024-11-28 10:09:29.104005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:50.324 [2024-11-28 10:09:29.104011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:50.324 [2024-11-28 10:09:29.104120] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 161.510 ms, result 0 00:36:50.896 00:36:50.896 00:36:50.896 10:09:29 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:53.439 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:36:53.439 10:09:31 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:36:53.439 [2024-11-28 10:09:31.877498] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:36:53.439 [2024-11-28 10:09:31.877620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87569 ] 00:36:53.439 [2024-11-28 10:09:32.036344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.439 [2024-11-28 10:09:32.130830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:53.701 [2024-11-28 10:09:32.364091] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:53.701 [2024-11-28 10:09:32.364161] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:53.701 [2024-11-28 10:09:32.519808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.701 [2024-11-28 10:09:32.519847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:36:53.701 [2024-11-28 10:09:32.519860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:53.701 [2024-11-28 10:09:32.519866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.701 [2024-11-28 10:09:32.519902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.701 [2024-11-28 10:09:32.519911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:53.701 [2024-11-28 10:09:32.519918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:36:53.701 [2024-11-28 10:09:32.519924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.701 [2024-11-28 10:09:32.519937] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:36:53.701 [2024-11-28 10:09:32.520483] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:36:53.701 [2024-11-28 10:09:32.520502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.701 [2024-11-28 10:09:32.520508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:53.701 [2024-11-28 10:09:32.520515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:36:53.702 [2024-11-28 10:09:32.520521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.702 [2024-11-28 10:09:32.520725] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:36:53.702 [2024-11-28 10:09:32.520744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.702 [2024-11-28 10:09:32.520754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:36:53.702 [2024-11-28 10:09:32.520761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:36:53.702 [2024-11-28 10:09:32.520767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.702 [2024-11-28 10:09:32.520806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.702 [2024-11-28 10:09:32.520814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:36:53.702 [2024-11-28 10:09:32.520821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:36:53.702 [2024-11-28 10:09:32.520826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.702 [2024-11-28 10:09:32.521033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:36:53.702 [2024-11-28 10:09:32.521049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:53.702 [2024-11-28 10:09:32.521057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:36:53.702 [2024-11-28 10:09:32.521067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.702 [2024-11-28 10:09:32.521145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.702 [2024-11-28 10:09:32.521168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:53.702 [2024-11-28 10:09:32.521176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:36:53.702 [2024-11-28 10:09:32.521182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.702 [2024-11-28 10:09:32.521201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.702 [2024-11-28 10:09:32.521209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:36:53.702 [2024-11-28 10:09:32.521218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:36:53.702 [2024-11-28 10:09:32.521224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.702 [2024-11-28 10:09:32.521237] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:36:53.702 [2024-11-28 10:09:32.524579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.702 [2024-11-28 10:09:32.524604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:53.702 [2024-11-28 10:09:32.524611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.344 ms 00:36:53.702 [2024-11-28 10:09:32.524617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.702 [2024-11-28 10:09:32.524645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.702 [2024-11-28 10:09:32.524652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:36:53.702 [2024-11-28 10:09:32.524659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:36:53.702 [2024-11-28 10:09:32.524665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.702 [2024-11-28 10:09:32.524698] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:36:53.702 [2024-11-28 10:09:32.524716] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:36:53.702 [2024-11-28 10:09:32.524745] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:36:53.702 [2024-11-28 10:09:32.524759] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:36:53.702 [2024-11-28 10:09:32.524840] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:36:53.702 [2024-11-28 10:09:32.524848] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:36:53.702 [2024-11-28 10:09:32.524856] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:36:53.702 [2024-11-28 10:09:32.524865] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:36:53.702 [2024-11-28 10:09:32.524871] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:36:53.702 [2024-11-28 10:09:32.524879] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:36:53.702 [2024-11-28 10:09:32.524886] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:36:53.702 [2024-11-28 10:09:32.524892] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:36:53.702 [2024-11-28 10:09:32.524898] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:36:53.702 [2024-11-28 10:09:32.524904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.702 [2024-11-28 10:09:32.524909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:36:53.702 [2024-11-28 10:09:32.524915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:36:53.702 [2024-11-28 10:09:32.524921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.702 [2024-11-28 10:09:32.524983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.702 [2024-11-28 10:09:32.524990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:36:53.702 [2024-11-28 10:09:32.524996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:36:53.702 [2024-11-28 10:09:32.525003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.702 [2024-11-28 10:09:32.525078] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:36:53.702 [2024-11-28 10:09:32.525087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:36:53.702 [2024-11-28 10:09:32.525094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:53.702 [2024-11-28 10:09:32.525101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:53.702 [2024-11-28 10:09:32.525107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:36:53.702 [2024-11-28 10:09:32.525113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:36:53.702 [2024-11-28 10:09:32.525118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:36:53.702 [2024-11-28 10:09:32.525123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:36:53.702 [2024-11-28 10:09:32.525131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:36:53.702 [2024-11-28 10:09:32.525137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:53.702 [2024-11-28 10:09:32.525142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:36:53.702 [2024-11-28 10:09:32.525147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:36:53.702 [2024-11-28 10:09:32.525164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:53.702 [2024-11-28 10:09:32.525170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:36:53.702 [2024-11-28 10:09:32.525175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:36:53.702 [2024-11-28 10:09:32.525184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:53.702 [2024-11-28 10:09:32.525190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:36:53.702 [2024-11-28 10:09:32.525195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:36:53.702 [2024-11-28 10:09:32.525201] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:53.702 [2024-11-28 10:09:32.525206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:36:53.702 [2024-11-28 10:09:32.525212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:36:53.702 [2024-11-28 10:09:32.525217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:53.702 [2024-11-28 10:09:32.525222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:36:53.702 [2024-11-28 10:09:32.525228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:36:53.702 [2024-11-28 10:09:32.525233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:53.702 [2024-11-28 10:09:32.525238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:36:53.702 [2024-11-28 10:09:32.525244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:36:53.702 [2024-11-28 10:09:32.525249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:53.702 [2024-11-28 10:09:32.525255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:36:53.702 [2024-11-28 10:09:32.525260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:36:53.702 [2024-11-28 10:09:32.525265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:53.702 [2024-11-28 10:09:32.525270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:36:53.702 [2024-11-28 10:09:32.525275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:36:53.702 [2024-11-28 10:09:32.525281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:53.702 [2024-11-28 10:09:32.525286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:36:53.702 [2024-11-28 10:09:32.525292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:36:53.702 [2024-11-28 10:09:32.525296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:53.702 [2024-11-28 10:09:32.525302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:36:53.702 [2024-11-28 10:09:32.525307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:36:53.702 [2024-11-28 10:09:32.525312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:53.702 [2024-11-28 10:09:32.525317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:36:53.702 [2024-11-28 10:09:32.525344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:36:53.702 [2024-11-28 10:09:32.525349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:53.702 [2024-11-28 10:09:32.525354] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:36:53.702 [2024-11-28 10:09:32.525361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:36:53.702 [2024-11-28 10:09:32.525366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:53.702 [2024-11-28 10:09:32.525372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:53.702 [2024-11-28 10:09:32.525380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:36:53.702 [2024-11-28 10:09:32.525386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:36:53.702 [2024-11-28 10:09:32.525391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:36:53.702 
[2024-11-28 10:09:32.525396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:36:53.702 [2024-11-28 10:09:32.525401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:36:53.703 [2024-11-28 10:09:32.525407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:36:53.703 [2024-11-28 10:09:32.525414] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:36:53.703 [2024-11-28 10:09:32.525421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:53.703 [2024-11-28 10:09:32.525428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:36:53.703 [2024-11-28 10:09:32.525433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:36:53.703 [2024-11-28 10:09:32.525439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:36:53.703 [2024-11-28 10:09:32.525445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:36:53.703 [2024-11-28 10:09:32.525451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:36:53.703 [2024-11-28 10:09:32.525456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:36:53.703 [2024-11-28 10:09:32.525462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:36:53.703 [2024-11-28 10:09:32.525468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:36:53.703 [2024-11-28 10:09:32.525473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:36:53.703 [2024-11-28 10:09:32.525478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:36:53.703 [2024-11-28 10:09:32.525484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:36:53.703 [2024-11-28 10:09:32.525490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:36:53.703 [2024-11-28 10:09:32.525495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:36:53.703 [2024-11-28 10:09:32.525500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:36:53.703 [2024-11-28 10:09:32.525506] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:36:53.703 [2024-11-28 10:09:32.525512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:53.703 [2024-11-28 10:09:32.525518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:36:53.703 [2024-11-28 10:09:32.525524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:36:53.703 [2024-11-28 10:09:32.525530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:36:53.703 [2024-11-28 10:09:32.525536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:36:53.703 [2024-11-28 10:09:32.525542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.703 [2024-11-28 10:09:32.525549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:36:53.703 [2024-11-28 10:09:32.525554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:36:53.703 [2024-11-28 10:09:32.525560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.703 [2024-11-28 10:09:32.546604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.703 [2024-11-28 10:09:32.546631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:53.703 [2024-11-28 10:09:32.546640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.012 ms 00:36:53.703 [2024-11-28 10:09:32.546646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.703 [2024-11-28 10:09:32.546710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.703 [2024-11-28 10:09:32.546717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:36:53.703 [2024-11-28 10:09:32.546726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:36:53.703 [2024-11-28 10:09:32.546732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.965 [2024-11-28 10:09:32.588703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.965 [2024-11-28 10:09:32.588735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:53.965 [2024-11-28 10:09:32.588745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.932 ms 00:36:53.965 [2024-11-28 10:09:32.588752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.965 [2024-11-28 10:09:32.588785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.965 [2024-11-28 10:09:32.588793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:53.965 [2024-11-28 10:09:32.588800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:53.965 [2024-11-28 10:09:32.588806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.965 [2024-11-28 10:09:32.588885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.965 [2024-11-28 10:09:32.588894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:53.965 [2024-11-28 10:09:32.588901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:36:53.965 [2024-11-28 10:09:32.588907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.965 [2024-11-28 10:09:32.589003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.965 [2024-11-28 10:09:32.589013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:53.965 [2024-11-28 10:09:32.589019] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:36:53.965 [2024-11-28 10:09:32.589025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.965 [2024-11-28 10:09:32.600950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.965 [2024-11-28 10:09:32.600977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:53.965 [2024-11-28 10:09:32.600986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.911 ms 00:36:53.965 [2024-11-28 10:09:32.600992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.965 [2024-11-28 10:09:32.601085] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:36:53.965 [2024-11-28 10:09:32.601095] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:36:53.965 [2024-11-28 10:09:32.601103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.965 [2024-11-28 10:09:32.601111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:36:53.965 [2024-11-28 10:09:32.601119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:36:53.965 [2024-11-28 10:09:32.601125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.965 [2024-11-28 10:09:32.613894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.965 [2024-11-28 10:09:32.613927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:36:53.965 [2024-11-28 10:09:32.613936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.758 ms 00:36:53.965 [2024-11-28 10:09:32.613943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.965 [2024-11-28 10:09:32.614040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.965 [2024-11-28 10:09:32.614046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:36:53.965 [2024-11-28 10:09:32.614053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:36:53.965 [2024-11-28 10:09:32.614062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.965 [2024-11-28 10:09:32.614111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.965 [2024-11-28 10:09:32.614120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:36:53.965 [2024-11-28 10:09:32.614132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:36:53.965 [2024-11-28 10:09:32.614138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.965 [2024-11-28 10:09:32.614619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.965 [2024-11-28 10:09:32.614635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:36:53.965 [2024-11-28 10:09:32.614643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:36:53.966 [2024-11-28 10:09:32.614649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.966 [2024-11-28 10:09:32.614666] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:36:53.966 [2024-11-28 10:09:32.614674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.966 [2024-11-28 10:09:32.614680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:36:53.966 [2024-11-28 10:09:32.614687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:36:53.966 [2024-11-28 10:09:32.614692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.966 [2024-11-28 10:09:32.624101] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:36:53.966 [2024-11-28 10:09:32.624213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.966 [2024-11-28 10:09:32.624222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:36:53.966 [2024-11-28 10:09:32.624230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.507 ms 00:36:53.966 [2024-11-28 10:09:32.624236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.966 [2024-11-28 10:09:32.625787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.966 [2024-11-28 10:09:32.625807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:36:53.966 [2024-11-28 10:09:32.625814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms 00:36:53.966 [2024-11-28 10:09:32.625820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.966 [2024-11-28 10:09:32.625887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.966 [2024-11-28 10:09:32.625895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:36:53.966 [2024-11-28 10:09:32.625902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:36:53.966 [2024-11-28 10:09:32.625908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.966 [2024-11-28 10:09:32.625926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.966 [2024-11-28 10:09:32.625936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:36:53.966 [2024-11-28 10:09:32.625942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:53.966 [2024-11-28 10:09:32.625949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.966 [2024-11-28 10:09:32.625974] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:36:53.966 [2024-11-28 10:09:32.625983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.966 [2024-11-28 10:09:32.625989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:36:53.966 [2024-11-28 10:09:32.625995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:36:53.966 [2024-11-28 10:09:32.626001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.966 [2024-11-28 10:09:32.645536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.966 [2024-11-28 10:09:32.645564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:36:53.966 [2024-11-28 10:09:32.645573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.520 ms 00:36:53.966 [2024-11-28 10:09:32.645579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.966 [2024-11-28 10:09:32.645638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:53.966 [2024-11-28 10:09:32.645646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:36:53.966 [2024-11-28 10:09:32.645653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.030 ms 00:36:53.966 [2024-11-28 10:09:32.645659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:53.966 [2024-11-28 10:09:32.646599] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 126.428 ms, result 0 00:36:54.911  [2024-11-28T10:09:34.738Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-28T10:09:35.684Z] Copying: 35/1024 [MB] (17 MBps) [2024-11-28T10:09:37.072Z] Copying: 57/1024 [MB] (22 MBps) [2024-11-28T10:09:38.016Z] Copying: 80/1024 [MB] (23 MBps) [2024-11-28T10:09:38.961Z] Copying: 103/1024 [MB] (22 MBps) [2024-11-28T10:09:39.905Z] Copying: 121/1024 [MB] (18 MBps) [2024-11-28T10:09:40.848Z] Copying: 140/1024 [MB] (19 MBps) [2024-11-28T10:09:41.793Z] Copying: 158/1024 [MB] (18 MBps) [2024-11-28T10:09:42.738Z] Copying: 178/1024 [MB] (19 MBps) [2024-11-28T10:09:43.683Z] Copying: 197/1024 [MB] (18 MBps) [2024-11-28T10:09:45.068Z] Copying: 215/1024 [MB] (18 MBps) [2024-11-28T10:09:46.021Z] Copying: 230/1024 [MB] (14 MBps) [2024-11-28T10:09:46.965Z] Copying: 242/1024 [MB] (11 MBps) [2024-11-28T10:09:47.910Z] Copying: 253/1024 [MB] (11 MBps) [2024-11-28T10:09:48.851Z] Copying: 264/1024 [MB] (11 MBps) [2024-11-28T10:09:49.816Z] Copying: 276/1024 [MB] (11 MBps) [2024-11-28T10:09:50.760Z] Copying: 288/1024 [MB] (11 MBps) [2024-11-28T10:09:51.704Z] Copying: 299/1024 [MB] (11 MBps) [2024-11-28T10:09:53.091Z] Copying: 311/1024 [MB] (11 MBps) [2024-11-28T10:09:53.663Z] Copying: 322/1024 [MB] (11 MBps) [2024-11-28T10:09:55.049Z] Copying: 333/1024 [MB] (11 MBps) [2024-11-28T10:09:55.992Z] Copying: 345/1024 [MB] (11 MBps) [2024-11-28T10:09:56.936Z] Copying: 356/1024 [MB] (11 MBps) [2024-11-28T10:09:57.880Z] Copying: 367/1024 [MB] (11 MBps) [2024-11-28T10:09:58.825Z] Copying: 378/1024 [MB] (11 MBps) [2024-11-28T10:09:59.768Z] Copying: 389/1024 [MB] (11 MBps) [2024-11-28T10:10:00.783Z] Copying: 401/1024 [MB] (11 MBps) [2024-11-28T10:10:01.735Z] Copying: 412/1024 [MB] (11 MBps) [2024-11-28T10:10:02.676Z] Copying: 422/1024 [MB] (10 MBps) [2024-11-28T10:10:04.059Z] Copying: 432/1024 [MB] (10 MBps) [2024-11-28T10:10:05.004Z] Copying: 444/1024 [MB] (11 MBps) [2024-11-28T10:10:05.947Z] Copying: 455/1024 [MB] (11 MBps) [2024-11-28T10:10:06.894Z] Copying: 466/1024 [MB] (11 MBps) [2024-11-28T10:10:07.838Z] Copying: 477/1024 [MB] (11 MBps) [2024-11-28T10:10:08.782Z] Copying: 488/1024 [MB] (11 MBps) [2024-11-28T10:10:09.727Z] Copying: 500/1024 [MB] (11 MBps) [2024-11-28T10:10:10.682Z] Copying: 511/1024 [MB] (11 MBps) [2024-11-28T10:10:12.069Z] Copying: 522/1024 [MB] (11 MBps) [2024-11-28T10:10:13.014Z] Copying: 533/1024 [MB] (11 MBps) [2024-11-28T10:10:13.958Z] Copying: 545/1024 [MB] (11 MBps) [2024-11-28T10:10:14.904Z] Copying: 556/1024 [MB] (11 MBps) [2024-11-28T10:10:15.846Z] Copying: 567/1024 [MB] (11 MBps) [2024-11-28T10:10:16.789Z] Copying: 578/1024 [MB] (11 MBps) [2024-11-28T10:10:17.732Z] Copying: 590/1024 [MB] (11 MBps) [2024-11-28T10:10:18.678Z] Copying: 601/1024 [MB] (11 MBps) [2024-11-28T10:10:20.065Z] Copying: 612/1024 [MB] (11 MBps) [2024-11-28T10:10:21.009Z] Copying: 624/1024 [MB] (11 MBps) [2024-11-28T10:10:21.953Z] Copying: 634/1024 [MB] (10 MBps) [2024-11-28T10:10:22.896Z] Copying: 644/1024 [MB] (10 MBps) [2024-11-28T10:10:23.840Z] Copying: 656/1024 [MB] (11 MBps) [2024-11-28T10:10:24.784Z] Copying: 667/1024 [MB] (11 MBps) [2024-11-28T10:10:25.725Z] Copying: 678/1024 [MB] (11 MBps) [2024-11-28T10:10:26.669Z] Copying: 690/1024 [MB] (11 MBps) [2024-11-28T10:10:28.055Z] Copying: 701/1024 
[MB] (11 MBps) [2024-11-28T10:10:28.999Z] Copying: 713/1024 [MB] (11 MBps) [2024-11-28T10:10:29.944Z] Copying: 724/1024 [MB] (11 MBps) [2024-11-28T10:10:30.889Z] Copying: 737/1024 [MB] (12 MBps) [2024-11-28T10:10:31.834Z] Copying: 750/1024 [MB] (12 MBps) [2024-11-28T10:10:32.853Z] Copying: 778280/1048576 [kB] (10064 kBps) [2024-11-28T10:10:33.803Z] Copying: 771/1024 [MB] (10 MBps) [2024-11-28T10:10:34.749Z] Copying: 784/1024 [MB] (13 MBps) [2024-11-28T10:10:35.706Z] Copying: 795/1024 [MB] (10 MBps) [2024-11-28T10:10:37.092Z] Copying: 807/1024 [MB] (12 MBps) [2024-11-28T10:10:37.663Z] Copying: 818/1024 [MB] (10 MBps) [2024-11-28T10:10:39.053Z] Copying: 829/1024 [MB] (10 MBps) [2024-11-28T10:10:39.998Z] Copying: 859200/1048576 [kB] (10240 kBps) [2024-11-28T10:10:40.942Z] Copying: 869216/1048576 [kB] (10016 kBps) [2024-11-28T10:10:41.884Z] Copying: 862/1024 [MB] (13 MBps) [2024-11-28T10:10:42.829Z] Copying: 873/1024 [MB] (11 MBps) [2024-11-28T10:10:43.774Z] Copying: 885/1024 [MB] (11 MBps) [2024-11-28T10:10:44.720Z] Copying: 896/1024 [MB] (11 MBps) [2024-11-28T10:10:45.663Z] Copying: 907/1024 [MB] (11 MBps) [2024-11-28T10:10:47.053Z] Copying: 919/1024 [MB] (11 MBps) [2024-11-28T10:10:47.998Z] Copying: 930/1024 [MB] (11 MBps) [2024-11-28T10:10:48.944Z] Copying: 942/1024 [MB] (11 MBps) [2024-11-28T10:10:49.889Z] Copying: 958/1024 [MB] (16 MBps) [2024-11-28T10:10:50.834Z] Copying: 969/1024 [MB] (11 MBps) [2024-11-28T10:10:51.779Z] Copying: 980/1024 [MB] (11 MBps) [2024-11-28T10:10:52.723Z] Copying: 992/1024 [MB] (11 MBps) [2024-11-28T10:10:53.670Z] Copying: 1003/1024 [MB] (11 MBps) [2024-11-28T10:10:55.059Z] Copying: 1013/1024 [MB] (10 MBps) [2024-11-28T10:10:55.631Z] Copying: 1047800/1048576 [kB] (9508 kBps) [2024-11-28T10:10:55.631Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-28 10:10:55.553505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:16.751 [2024-11-28 10:10:55.553604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:16.751 [2024-11-28 10:10:55.553622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:16.751 [2024-11-28 10:10:55.553634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:16.751 [2024-11-28 10:10:55.556534] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:16.751 [2024-11-28 10:10:55.561201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:16.751 [2024-11-28 10:10:55.561246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:16.751 [2024-11-28 10:10:55.561259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.614 ms 00:38:16.751 [2024-11-28 10:10:55.561268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:16.751 [2024-11-28 10:10:55.573057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:16.751 [2024-11-28 10:10:55.573102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:16.751 [2024-11-28 10:10:55.573115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.466 ms 00:38:16.751 [2024-11-28 10:10:55.573125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:16.751 [2024-11-28 10:10:55.573165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:16.751 [2024-11-28 10:10:55.573177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:38:16.751 [2024-11-28 
10:10:55.573186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:38:16.751 [2024-11-28 10:10:55.573195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:16.751 [2024-11-28 10:10:55.573260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:16.751 [2024-11-28 10:10:55.573273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:38:16.751 [2024-11-28 10:10:55.573282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:38:16.751 [2024-11-28 10:10:55.573291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:16.751 [2024-11-28 10:10:55.573306] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:16.751 [2024-11-28 10:10:55.573320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128512 / 261120 wr_cnt: 1 state: open 00:38:16.751 [2024-11-28 10:10:55.573331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573474] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:16.751 [2024-11-28 10:10:55.573520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 
10:10:55.573668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:38:16.752 [2024-11-28 10:10:55.573878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.573987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:16.752 [2024-11-28 10:10:55.574140] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:16.752 [2024-11-28 10:10:55.574148] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 98d603d3-a2ee-4c3b-94b8-6abc305400e3 00:38:16.752 [2024-11-28 10:10:55.574170] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128512 00:38:16.752 [2024-11-28 10:10:55.574179] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128544 00:38:16.752 [2024-11-28 10:10:55.574187] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128512 00:38:16.752 [2024-11-28 10:10:55.574195] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:38:16.752 [2024-11-28 10:10:55.574207] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:16.752 [2024-11-28 10:10:55.574215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:16.752 [2024-11-28 10:10:55.574223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:16.752 [2024-11-28 10:10:55.574229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:16.752 [2024-11-28 10:10:55.574235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:16.752 [2024-11-28 10:10:55.574242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:16.752 [2024-11-28 10:10:55.574249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:16.752 [2024-11-28 10:10:55.574257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:38:16.752 [2024-11-28 10:10:55.574264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:16.752 [2024-11-28 10:10:55.588915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:16.752 [2024-11-28 10:10:55.588956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:16.752 [2024-11-28 10:10:55.588976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.634 ms 00:38:16.752 [2024-11-28 10:10:55.588985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:16.752 [2024-11-28 10:10:55.589436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:16.752 [2024-11-28 10:10:55.589451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:16.752 [2024-11-28 10:10:55.589460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:38:16.752 [2024-11-28 10:10:55.589469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:16.752 [2024-11-28 
10:10:55.628844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:16.752 [2024-11-28 10:10:55.628891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:16.752 [2024-11-28 10:10:55.628903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:16.752 [2024-11-28 10:10:55.628912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:16.752 [2024-11-28 10:10:55.628986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:16.752 [2024-11-28 10:10:55.628996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:16.752 [2024-11-28 10:10:55.629006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:16.752 [2024-11-28 10:10:55.629016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:16.752 [2024-11-28 10:10:55.629075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:16.752 [2024-11-28 10:10:55.629087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:17.014 [2024-11-28 10:10:55.629103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:17.014 [2024-11-28 10:10:55.629113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:17.014 [2024-11-28 10:10:55.629129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:17.014 [2024-11-28 10:10:55.629138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:17.014 [2024-11-28 10:10:55.629147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:17.014 [2024-11-28 10:10:55.629173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:17.014 [2024-11-28 10:10:55.720243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:17.014 [2024-11-28 10:10:55.720302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:17.014 [2024-11-28 10:10:55.720315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:17.014 [2024-11-28 10:10:55.720323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:17.014 [2024-11-28 10:10:55.794439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:17.014 [2024-11-28 10:10:55.794504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:17.014 [2024-11-28 10:10:55.794519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:17.014 [2024-11-28 10:10:55.794528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:17.014 [2024-11-28 10:10:55.794637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:17.014 [2024-11-28 10:10:55.794650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:17.014 [2024-11-28 10:10:55.794660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:17.014 [2024-11-28 10:10:55.794673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:17.014 [2024-11-28 10:10:55.794720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:17.014 [2024-11-28 10:10:55.794732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:17.014 [2024-11-28 10:10:55.794742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:17.014 [2024-11-28 10:10:55.794750] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:17.014 [2024-11-28 10:10:55.794837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:17.014 [2024-11-28 10:10:55.794849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:17.014 [2024-11-28 10:10:55.794859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:17.014 [2024-11-28 10:10:55.794867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:17.014 [2024-11-28 10:10:55.794901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:17.014 [2024-11-28 10:10:55.794911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:17.014 [2024-11-28 10:10:55.794920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:17.014 [2024-11-28 10:10:55.794930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:17.014 [2024-11-28 10:10:55.794978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:17.014 [2024-11-28 10:10:55.794989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:17.014 [2024-11-28 10:10:55.794998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:17.014 [2024-11-28 10:10:55.795007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:17.014 [2024-11-28 10:10:55.795064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:17.014 [2024-11-28 10:10:55.795075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:17.014 [2024-11-28 10:10:55.795084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:17.014 [2024-11-28 10:10:55.795092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:17.014 [2024-11-28 10:10:55.795290] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 244.041 ms, result 0 00:38:18.930 00:38:18.930 00:38:18.930 10:10:57 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:38:18.930 [2024-11-28 10:10:57.388121] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:38:18.930 [2024-11-28 10:10:57.388282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88408 ] 00:38:18.930 [2024-11-28 10:10:57.550811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:18.930 [2024-11-28 10:10:57.700000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:19.191 [2024-11-28 10:10:58.012263] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:19.191 [2024-11-28 10:10:58.012327] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:19.455 [2024-11-28 10:10:58.171367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.455 [2024-11-28 10:10:58.171414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:19.455 [2024-11-28 10:10:58.171428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:38:19.455 [2024-11-28 10:10:58.171436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.455 [2024-11-28 10:10:58.171485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.455 [2024-11-28 10:10:58.171497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:19.455 [2024-11-28 10:10:58.171506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:38:19.455 [2024-11-28 10:10:58.171514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.455 [2024-11-28 10:10:58.171534] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:19.455 [2024-11-28 10:10:58.172206] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:19.455 [2024-11-28 10:10:58.172230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.455 [2024-11-28 10:10:58.172238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:19.455 [2024-11-28 10:10:58.172246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:38:19.455 [2024-11-28 10:10:58.172254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.455 [2024-11-28 10:10:58.172710] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:38:19.455 [2024-11-28 10:10:58.172760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.455 [2024-11-28 10:10:58.172775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:38:19.455 [2024-11-28 10:10:58.172786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:38:19.455 [2024-11-28 10:10:58.172794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.455 [2024-11-28 10:10:58.172844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.455 [2024-11-28 10:10:58.172854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:38:19.455 [2024-11-28 10:10:58.172862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:38:19.455 [2024-11-28 10:10:58.172870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.455 [2024-11-28 10:10:58.173144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:38:19.455 [2024-11-28 10:10:58.173182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:19.455 [2024-11-28 10:10:58.173191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:38:19.455 [2024-11-28 10:10:58.173199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.455 [2024-11-28 10:10:58.173266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.455 [2024-11-28 10:10:58.173276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:19.455 [2024-11-28 10:10:58.173284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:38:19.455 [2024-11-28 10:10:58.173291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.455 [2024-11-28 10:10:58.173313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.455 [2024-11-28 10:10:58.173323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:19.455 [2024-11-28 10:10:58.173333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:38:19.455 [2024-11-28 10:10:58.173340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.455 [2024-11-28 10:10:58.173358] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:19.455 [2024-11-28 10:10:58.177294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.455 [2024-11-28 10:10:58.177323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:19.455 [2024-11-28 10:10:58.177333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.942 ms 00:38:19.455 [2024-11-28 10:10:58.177340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.455 [2024-11-28 10:10:58.177373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.455 [2024-11-28 10:10:58.177381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:19.455 [2024-11-28 10:10:58.177389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:38:19.455 [2024-11-28 10:10:58.177396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.455 [2024-11-28 10:10:58.177446] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:38:19.455 [2024-11-28 10:10:58.177468] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:38:19.455 [2024-11-28 10:10:58.177507] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:38:19.455 [2024-11-28 10:10:58.177522] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:38:19.455 [2024-11-28 10:10:58.177626] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:19.455 [2024-11-28 10:10:58.177637] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:19.455 [2024-11-28 10:10:58.177648] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:38:19.455 [2024-11-28 10:10:58.177659] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:19.455 [2024-11-28 10:10:58.177668] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:19.455 [2024-11-28 10:10:58.177678] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:38:19.455 [2024-11-28 10:10:58.177686] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:19.455 [2024-11-28 10:10:58.177693] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:19.455 [2024-11-28 10:10:58.177700] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:19.455 [2024-11-28 10:10:58.177708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.455 [2024-11-28 10:10:58.177715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:19.455 [2024-11-28 10:10:58.177722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:38:19.455 [2024-11-28 10:10:58.177729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.455 [2024-11-28 10:10:58.177811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.455 [2024-11-28 10:10:58.177819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:19.455 [2024-11-28 10:10:58.177826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:38:19.455 [2024-11-28 10:10:58.177835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.455 [2024-11-28 10:10:58.177935] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:19.455 [2024-11-28 10:10:58.177946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:19.455 [2024-11-28 10:10:58.177954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:19.455 [2024-11-28 10:10:58.177961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:19.455 [2024-11-28 10:10:58.177969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:19.455 [2024-11-28 10:10:58.177976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:19.455 [2024-11-28 10:10:58.177984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:38:19.455 [2024-11-28 10:10:58.177991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:19.455 [2024-11-28 10:10:58.177997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:38:19.455 [2024-11-28 10:10:58.178004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:19.455 [2024-11-28 10:10:58.178014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:19.455 [2024-11-28 10:10:58.178021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:38:19.455 [2024-11-28 10:10:58.178029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:19.455 [2024-11-28 10:10:58.178036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:19.455 [2024-11-28 10:10:58.178042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:38:19.455 [2024-11-28 10:10:58.178053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:19.455 [2024-11-28 10:10:58.178060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:19.455 [2024-11-28 10:10:58.178067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:38:19.455 [2024-11-28 10:10:58.178074] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:19.455 [2024-11-28 10:10:58.178081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:19.455 [2024-11-28 10:10:58.178087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:38:19.455 [2024-11-28 10:10:58.178094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:19.455 [2024-11-28 10:10:58.178100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:19.455 [2024-11-28 10:10:58.178107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:38:19.455 [2024-11-28 10:10:58.178113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:19.455 [2024-11-28 10:10:58.178120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:19.455 [2024-11-28 10:10:58.178127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:38:19.455 [2024-11-28 10:10:58.178133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:19.455 [2024-11-28 10:10:58.178139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:19.456 [2024-11-28 10:10:58.178146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:38:19.456 [2024-11-28 10:10:58.178166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:19.456 [2024-11-28 10:10:58.178174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:19.456 [2024-11-28 10:10:58.178180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:38:19.456 [2024-11-28 10:10:58.178187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:19.456 [2024-11-28 10:10:58.178193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:19.456 [2024-11-28 10:10:58.178199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:38:19.456 [2024-11-28 10:10:58.178205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:19.456 [2024-11-28 10:10:58.178212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:19.456 [2024-11-28 10:10:58.178219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:38:19.456 [2024-11-28 10:10:58.178226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:19.456 [2024-11-28 10:10:58.178232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:19.456 [2024-11-28 10:10:58.178238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:38:19.456 [2024-11-28 10:10:58.178246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:19.456 [2024-11-28 10:10:58.178253] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:19.456 [2024-11-28 10:10:58.178261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:19.456 [2024-11-28 10:10:58.178269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:19.456 [2024-11-28 10:10:58.178276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:19.456 [2024-11-28 10:10:58.178286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:19.456 [2024-11-28 10:10:58.178293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:19.456 [2024-11-28 10:10:58.178300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:19.456 
[2024-11-28 10:10:58.178308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:19.456 [2024-11-28 10:10:58.178315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:19.456 [2024-11-28 10:10:58.178322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:19.456 [2024-11-28 10:10:58.178331] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:19.456 [2024-11-28 10:10:58.178340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:19.456 [2024-11-28 10:10:58.178349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:38:19.456 [2024-11-28 10:10:58.178356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:38:19.456 [2024-11-28 10:10:58.178363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:38:19.456 [2024-11-28 10:10:58.178370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:38:19.456 [2024-11-28 10:10:58.178378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:38:19.456 [2024-11-28 10:10:58.178386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:38:19.456 [2024-11-28 10:10:58.178416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:38:19.456 [2024-11-28 10:10:58.178426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:38:19.456 [2024-11-28 10:10:58.178433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:38:19.456 [2024-11-28 10:10:58.178441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:38:19.456 [2024-11-28 10:10:58.178448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:38:19.456 [2024-11-28 10:10:58.178455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:38:19.456 [2024-11-28 10:10:58.178463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:38:19.456 [2024-11-28 10:10:58.178471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:38:19.456 [2024-11-28 10:10:58.178478] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:19.456 [2024-11-28 10:10:58.178486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:19.456 [2024-11-28 10:10:58.178494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:38:19.456 [2024-11-28 10:10:58.178501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:19.456 [2024-11-28 10:10:58.178508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:19.456 [2024-11-28 10:10:58.178516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:19.456 [2024-11-28 10:10:58.178524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.456 [2024-11-28 10:10:58.178531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:19.456 [2024-11-28 10:10:58.178540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:38:19.456 [2024-11-28 10:10:58.178549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.456 [2024-11-28 10:10:58.204486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.456 [2024-11-28 10:10:58.204517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:19.456 [2024-11-28 10:10:58.204527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.897 ms 00:38:19.456 [2024-11-28 10:10:58.204535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.456 [2024-11-28 10:10:58.204615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.456 [2024-11-28 10:10:58.204623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:19.456 [2024-11-28 10:10:58.204634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:38:19.456 [2024-11-28 10:10:58.204642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.456 [2024-11-28 10:10:58.247015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.456 [2024-11-28 10:10:58.247063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:19.456 [2024-11-28 10:10:58.247075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.326 ms 00:38:19.456 [2024-11-28 10:10:58.247088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.456 [2024-11-28 10:10:58.247132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.456 [2024-11-28 10:10:58.247142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:19.456 [2024-11-28 10:10:58.247150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:38:19.456 [2024-11-28 10:10:58.247171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.456 [2024-11-28 10:10:58.247265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.456 [2024-11-28 10:10:58.247277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:19.456 [2024-11-28 10:10:58.247287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:38:19.456 [2024-11-28 10:10:58.247296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.456 [2024-11-28 10:10:58.247413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.456 [2024-11-28 10:10:58.247425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:19.456 [2024-11-28 10:10:58.247433] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:38:19.456 [2024-11-28 10:10:58.247441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.456 [2024-11-28 10:10:58.262139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.456 [2024-11-28 10:10:58.262181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:19.456 [2024-11-28 10:10:58.262192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.680 ms 00:38:19.456 [2024-11-28 10:10:58.262200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.456 [2024-11-28 10:10:58.262312] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:38:19.456 [2024-11-28 10:10:58.262325] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:38:19.456 [2024-11-28 10:10:58.262334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.456 [2024-11-28 10:10:58.262344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:38:19.456 [2024-11-28 10:10:58.262353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:38:19.456 [2024-11-28 10:10:58.262361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.456 [2024-11-28 10:10:58.274632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.456 [2024-11-28 10:10:58.274660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:38:19.456 [2024-11-28 10:10:58.274671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.257 ms 00:38:19.456 [2024-11-28 10:10:58.274679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.456 [2024-11-28 10:10:58.274798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.456 [2024-11-28 10:10:58.274807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:38:19.456 [2024-11-28 10:10:58.274815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:38:19.456 [2024-11-28 10:10:58.274827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.456 [2024-11-28 10:10:58.274891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.456 [2024-11-28 10:10:58.274901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:38:19.456 [2024-11-28 10:10:58.274910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:38:19.456 [2024-11-28 10:10:58.274924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.456 [2024-11-28 10:10:58.275518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.456 [2024-11-28 10:10:58.275537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:19.456 [2024-11-28 10:10:58.275546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:38:19.456 [2024-11-28 10:10:58.275554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.456 [2024-11-28 10:10:58.275573] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:38:19.456 [2024-11-28 10:10:58.275583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.457 [2024-11-28 10:10:58.275590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:38:19.457 [2024-11-28 10:10:58.275599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:38:19.457 [2024-11-28 10:10:58.275606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.457 [2024-11-28 10:10:58.287613] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:38:19.457 [2024-11-28 10:10:58.287744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.457 [2024-11-28 10:10:58.287754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:19.457 [2024-11-28 10:10:58.287764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.121 ms 00:38:19.457 [2024-11-28 10:10:58.287772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.457 [2024-11-28 10:10:58.289926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.457 [2024-11-28 10:10:58.289950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:38:19.457 [2024-11-28 10:10:58.289959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:38:19.457 [2024-11-28 10:10:58.289967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.457 [2024-11-28 10:10:58.290031] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:38:19.457 [2024-11-28 10:10:58.290508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.457 [2024-11-28 10:10:58.290525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:19.457 [2024-11-28 10:10:58.290535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:38:19.457 [2024-11-28 10:10:58.290542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.457 [2024-11-28 10:10:58.290570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.457 [2024-11-28 10:10:58.290579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:19.457 [2024-11-28 10:10:58.290587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:19.457 [2024-11-28 10:10:58.290595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.457 [2024-11-28 10:10:58.290626] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:38:19.457 [2024-11-28 10:10:58.290636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.457 [2024-11-28 10:10:58.290643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:38:19.457 [2024-11-28 10:10:58.290652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:38:19.457 [2024-11-28 10:10:58.290661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.457 [2024-11-28 10:10:58.315374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.457 [2024-11-28 10:10:58.315409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:19.457 [2024-11-28 10:10:58.315420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.697 ms 00:38:19.457 [2024-11-28 10:10:58.315428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.457 [2024-11-28 10:10:58.315499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:19.457 [2024-11-28 10:10:58.315509] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:19.457 [2024-11-28 10:10:58.315518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:38:19.457 [2024-11-28 10:10:58.315525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:19.457 [2024-11-28 10:10:58.316517] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 144.713 ms, result 0 00:38:20.847  [2024-11-28T10:11:00.673Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-28T10:11:01.618Z] Copying: 22/1024 [MB] (11 MBps) [2024-11-28T10:11:02.564Z] Copying: 33/1024 [MB] (11 MBps) [2024-11-28T10:11:03.959Z] Copying: 45/1024 [MB] (11 MBps) [2024-11-28T10:11:04.609Z] Copying: 56/1024 [MB] (11 MBps) [2024-11-28T10:11:05.560Z] Copying: 67/1024 [MB] (11 MBps) [2024-11-28T10:11:06.951Z] Copying: 79/1024 [MB] (11 MBps) [2024-11-28T10:11:07.525Z] Copying: 91/1024 [MB] (11 MBps) [2024-11-28T10:11:08.914Z] Copying: 102/1024 [MB] (11 MBps) [2024-11-28T10:11:09.859Z] Copying: 114/1024 [MB] (11 MBps) [2024-11-28T10:11:10.802Z] Copying: 126/1024 [MB] (11 MBps) [2024-11-28T10:11:11.745Z] Copying: 138/1024 [MB] (11 MBps) [2024-11-28T10:11:12.689Z] Copying: 149/1024 [MB] (11 MBps) [2024-11-28T10:11:13.634Z] Copying: 161/1024 [MB] (12 MBps) [2024-11-28T10:11:14.578Z] Copying: 173/1024 [MB] (11 MBps) [2024-11-28T10:11:15.522Z] Copying: 185/1024 [MB] (12 MBps) [2024-11-28T10:11:16.908Z] Copying: 197/1024 [MB] (11 MBps) [2024-11-28T10:11:17.852Z] Copying: 208/1024 [MB] (11 MBps) [2024-11-28T10:11:18.797Z] Copying: 219/1024 [MB] (11 MBps) [2024-11-28T10:11:19.743Z] Copying: 231/1024 [MB] (11 MBps) [2024-11-28T10:11:20.687Z] Copying: 243/1024 [MB] (11 MBps) [2024-11-28T10:11:21.633Z] Copying: 254/1024 [MB] (11 MBps) [2024-11-28T10:11:22.579Z] Copying: 266/1024 [MB] (11 MBps) [2024-11-28T10:11:23.525Z] Copying: 276/1024 [MB] (10 MBps) [2024-11-28T10:11:24.915Z] Copying: 288/1024 [MB] (11 MBps) [2024-11-28T10:11:25.860Z] Copying: 299/1024 [MB] (11 MBps) [2024-11-28T10:11:26.806Z] Copying: 311/1024 [MB] (11 MBps) [2024-11-28T10:11:27.773Z] Copying: 322/1024 [MB] (11 MBps) [2024-11-28T10:11:28.716Z] Copying: 334/1024 [MB] (11 MBps) [2024-11-28T10:11:29.660Z] Copying: 346/1024 [MB] (11 MBps) [2024-11-28T10:11:30.606Z] Copying: 357/1024 [MB] (11 MBps) [2024-11-28T10:11:31.552Z] Copying: 368/1024 [MB] (10 MBps) [2024-11-28T10:11:32.943Z] Copying: 380/1024 [MB] (11 MBps) [2024-11-28T10:11:33.516Z] Copying: 391/1024 [MB] (11 MBps) [2024-11-28T10:11:34.905Z] Copying: 403/1024 [MB] (11 MBps) [2024-11-28T10:11:35.850Z] Copying: 414/1024 [MB] (11 MBps) [2024-11-28T10:11:36.524Z] Copying: 425/1024 [MB] (11 MBps) [2024-11-28T10:11:37.930Z] Copying: 437/1024 [MB] (11 MBps) [2024-11-28T10:11:38.876Z] Copying: 449/1024 [MB] (11 MBps) [2024-11-28T10:11:39.822Z] Copying: 461/1024 [MB] (11 MBps) [2024-11-28T10:11:40.767Z] Copying: 472/1024 [MB] (11 MBps) [2024-11-28T10:11:41.711Z] Copying: 484/1024 [MB] (11 MBps) [2024-11-28T10:11:42.655Z] Copying: 495/1024 [MB] (10 MBps) [2024-11-28T10:11:43.600Z] Copying: 506/1024 [MB] (11 MBps) [2024-11-28T10:11:44.544Z] Copying: 518/1024 [MB] (11 MBps) [2024-11-28T10:11:45.930Z] Copying: 529/1024 [MB] (11 MBps) [2024-11-28T10:11:46.876Z] Copying: 540/1024 [MB] (11 MBps) [2024-11-28T10:11:47.819Z] Copying: 551/1024 [MB] (10 MBps) [2024-11-28T10:11:48.766Z] Copying: 563/1024 [MB] (11 MBps) [2024-11-28T10:11:49.709Z] Copying: 575/1024 [MB] (11 MBps) [2024-11-28T10:11:50.655Z] Copying: 586/1024 [MB] (11 MBps) 
[2024-11-28T10:11:51.601Z] Copying: 597/1024 [MB] (10 MBps) [2024-11-28T10:11:52.550Z] Copying: 608/1024 [MB] (10 MBps) [2024-11-28T10:11:53.939Z] Copying: 619/1024 [MB] (11 MBps) [2024-11-28T10:11:54.513Z] Copying: 631/1024 [MB] (11 MBps) [2024-11-28T10:11:55.901Z] Copying: 642/1024 [MB] (11 MBps) [2024-11-28T10:11:56.846Z] Copying: 654/1024 [MB] (11 MBps) [2024-11-28T10:11:57.791Z] Copying: 665/1024 [MB] (11 MBps) [2024-11-28T10:11:58.737Z] Copying: 677/1024 [MB] (11 MBps) [2024-11-28T10:11:59.683Z] Copying: 687/1024 [MB] (10 MBps) [2024-11-28T10:12:00.628Z] Copying: 698/1024 [MB] (11 MBps) [2024-11-28T10:12:01.575Z] Copying: 710/1024 [MB] (12 MBps) [2024-11-28T10:12:02.519Z] Copying: 722/1024 [MB] (11 MBps) [2024-11-28T10:12:03.909Z] Copying: 734/1024 [MB] (11 MBps) [2024-11-28T10:12:04.853Z] Copying: 745/1024 [MB] (11 MBps) [2024-11-28T10:12:05.797Z] Copying: 756/1024 [MB] (11 MBps) [2024-11-28T10:12:06.744Z] Copying: 768/1024 [MB] (11 MBps) [2024-11-28T10:12:07.687Z] Copying: 779/1024 [MB] (10 MBps) [2024-11-28T10:12:08.717Z] Copying: 790/1024 [MB] (11 MBps) [2024-11-28T10:12:09.662Z] Copying: 800/1024 [MB] (10 MBps) [2024-11-28T10:12:10.607Z] Copying: 812/1024 [MB] (11 MBps) [2024-11-28T10:12:11.553Z] Copying: 823/1024 [MB] (11 MBps) [2024-11-28T10:12:12.941Z] Copying: 835/1024 [MB] (11 MBps) [2024-11-28T10:12:13.515Z] Copying: 847/1024 [MB] (11 MBps) [2024-11-28T10:12:14.905Z] Copying: 859/1024 [MB] (11 MBps) [2024-11-28T10:12:15.849Z] Copying: 870/1024 [MB] (11 MBps) [2024-11-28T10:12:16.794Z] Copying: 880/1024 [MB] (10 MBps) [2024-11-28T10:12:17.739Z] Copying: 891/1024 [MB] (11 MBps) [2024-11-28T10:12:18.683Z] Copying: 903/1024 [MB] (11 MBps) [2024-11-28T10:12:19.628Z] Copying: 915/1024 [MB] (11 MBps) [2024-11-28T10:12:20.572Z] Copying: 926/1024 [MB] (11 MBps) [2024-11-28T10:12:21.513Z] Copying: 938/1024 [MB] (11 MBps) [2024-11-28T10:12:22.899Z] Copying: 950/1024 [MB] (12 MBps) [2024-11-28T10:12:23.845Z] Copying: 962/1024 [MB] (11 MBps) [2024-11-28T10:12:24.789Z] Copying: 974/1024 [MB] (11 MBps) [2024-11-28T10:12:25.732Z] Copying: 985/1024 [MB] (11 MBps) [2024-11-28T10:12:26.702Z] Copying: 997/1024 [MB] (11 MBps) [2024-11-28T10:12:27.646Z] Copying: 1009/1024 [MB] (11 MBps) [2024-11-28T10:12:27.907Z] Copying: 1021/1024 [MB] (11 MBps) [2024-11-28T10:12:28.169Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-28 10:12:27.958564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.289 [2024-11-28 10:12:27.958645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:49.289 [2024-11-28 10:12:27.958666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:49.289 [2024-11-28 10:12:27.958677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.289 [2024-11-28 10:12:27.958706] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:49.289 [2024-11-28 10:12:27.963093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.289 [2024-11-28 10:12:27.963132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:49.289 [2024-11-28 10:12:27.963144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.365 ms 00:39:49.289 [2024-11-28 10:12:27.963168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.289 [2024-11-28 10:12:27.963417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.289 [2024-11-28 10:12:27.963431] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:49.289 [2024-11-28 10:12:27.963442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:39:49.289 [2024-11-28 10:12:27.963450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.289 [2024-11-28 10:12:27.963482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.289 [2024-11-28 10:12:27.963493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:39:49.289 [2024-11-28 10:12:27.963502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:49.289 [2024-11-28 10:12:27.963512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.289 [2024-11-28 10:12:27.963566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.289 [2024-11-28 10:12:27.963578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:39:49.289 [2024-11-28 10:12:27.963587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:39:49.289 [2024-11-28 10:12:27.963597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.289 [2024-11-28 10:12:27.963613] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:49.289 [2024-11-28 10:12:27.963628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:39:49.289 [2024-11-28 10:12:27.963639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:49.289 [2024-11-28 10:12:27.963648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963976] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.963992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 
10:12:27.964216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:49.290 [2024-11-28 10:12:27.964287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 
00:39:49.291 [2024-11-28 10:12:27.964433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:49.291 [2024-11-28 10:12:27.964526] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:49.291 [2024-11-28 10:12:27.964534] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 98d603d3-a2ee-4c3b-94b8-6abc305400e3 00:39:49.291 [2024-11-28 10:12:27.964543] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:39:49.291 [2024-11-28 10:12:27.964551] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2592 00:39:49.291 [2024-11-28 10:12:27.964559] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2560 00:39:49.291 [2024-11-28 10:12:27.964570] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:39:49.291 [2024-11-28 10:12:27.964578] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:49.291 [2024-11-28 10:12:27.964587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:49.291 [2024-11-28 10:12:27.964594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:49.291 [2024-11-28 10:12:27.964601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:49.291 [2024-11-28 10:12:27.964608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:49.291 [2024-11-28 10:12:27.964615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.291 [2024-11-28 10:12:27.964623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:49.291 [2024-11-28 10:12:27.964632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:39:49.291 [2024-11-28 10:12:27.964640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.291 [2024-11-28 10:12:27.977985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.291 [2024-11-28 10:12:27.978017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:49.291 [2024-11-28 10:12:27.978032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.328 ms 00:39:49.291 [2024-11-28 10:12:27.978039] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.291 [2024-11-28 10:12:27.978354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.291 [2024-11-28 10:12:27.978368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:49.291 [2024-11-28 10:12:27.978376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:39:49.291 [2024-11-28 10:12:27.978382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.291 [2024-11-28 10:12:28.006305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:49.291 [2024-11-28 10:12:28.006334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:49.291 [2024-11-28 10:12:28.006342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:49.291 [2024-11-28 10:12:28.006348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.291 [2024-11-28 10:12:28.006401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:49.291 [2024-11-28 10:12:28.006407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:49.291 [2024-11-28 10:12:28.006414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:49.291 [2024-11-28 10:12:28.006419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.291 [2024-11-28 10:12:28.006464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:49.291 [2024-11-28 10:12:28.006475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:49.291 [2024-11-28 10:12:28.006482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:49.291 [2024-11-28 10:12:28.006489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.291 [2024-11-28 10:12:28.006502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:49.291 [2024-11-28 10:12:28.006509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:49.291 [2024-11-28 10:12:28.006516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:49.291 [2024-11-28 10:12:28.006523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.291 [2024-11-28 10:12:28.069329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:49.291 [2024-11-28 10:12:28.069364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:49.291 [2024-11-28 10:12:28.069373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:49.291 [2024-11-28 10:12:28.069379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.291 [2024-11-28 10:12:28.120131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:49.291 [2024-11-28 10:12:28.120172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:49.291 [2024-11-28 10:12:28.120182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:49.291 [2024-11-28 10:12:28.120188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.291 [2024-11-28 10:12:28.120255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:49.291 [2024-11-28 10:12:28.120263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:49.291 [2024-11-28 10:12:28.120273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:39:49.291 [2024-11-28 10:12:28.120279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.292 [2024-11-28 10:12:28.120308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:49.292 [2024-11-28 10:12:28.120317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:49.292 [2024-11-28 10:12:28.120324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:49.292 [2024-11-28 10:12:28.120331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.292 [2024-11-28 10:12:28.120393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:49.292 [2024-11-28 10:12:28.120402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:49.292 [2024-11-28 10:12:28.120408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:49.292 [2024-11-28 10:12:28.120416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.292 [2024-11-28 10:12:28.120436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:49.292 [2024-11-28 10:12:28.120443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:49.292 [2024-11-28 10:12:28.120449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:49.292 [2024-11-28 10:12:28.120455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.292 [2024-11-28 10:12:28.120489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:49.292 [2024-11-28 10:12:28.120496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:49.292 [2024-11-28 10:12:28.120503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:49.292 [2024-11-28 10:12:28.120512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.292 [2024-11-28 10:12:28.120549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:49.292 [2024-11-28 10:12:28.120556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:49.292 [2024-11-28 10:12:28.120563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:49.292 [2024-11-28 10:12:28.120569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.292 [2024-11-28 10:12:28.120680] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 162.097 ms, result 0 00:39:49.864 00:39:49.864 00:39:49.864 10:12:28 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:39:52.414 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:52.414 Process with pid 85647 is not found 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 85647 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- 
common/autotest_common.sh@954 -- # '[' -z 85647 ']' 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 85647 00:39:52.414 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85647) - No such process 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 85647 is not found' 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:39:52.414 Remove shared memory files 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_98d603d3-a2ee-4c3b-94b8-6abc305400e3_band_md /dev/hugepages/ftl_98d603d3-a2ee-4c3b-94b8-6abc305400e3_l2p_l1 /dev/hugepages/ftl_98d603d3-a2ee-4c3b-94b8-6abc305400e3_l2p_l2 /dev/hugepages/ftl_98d603d3-a2ee-4c3b-94b8-6abc305400e3_l2p_l2_ctx /dev/hugepages/ftl_98d603d3-a2ee-4c3b-94b8-6abc305400e3_nvc_md /dev/hugepages/ftl_98d603d3-a2ee-4c3b-94b8-6abc305400e3_p2l_pool /dev/hugepages/ftl_98d603d3-a2ee-4c3b-94b8-6abc305400e3_sb /dev/hugepages/ftl_98d603d3-a2ee-4c3b-94b8-6abc305400e3_sb_shm /dev/hugepages/ftl_98d603d3-a2ee-4c3b-94b8-6abc305400e3_trim_bitmap /dev/hugepages/ftl_98d603d3-a2ee-4c3b-94b8-6abc305400e3_trim_log /dev/hugepages/ftl_98d603d3-a2ee-4c3b-94b8-6abc305400e3_trim_md /dev/hugepages/ftl_98d603d3-a2ee-4c3b-94b8-6abc305400e3_vmap 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:39:52.414 00:39:52.414 real 6m9.708s 00:39:52.414 user 5m58.499s 00:39:52.414 sys 0m11.023s 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:52.414 ************************************ 00:39:52.414 END TEST ftl_restore_fast 00:39:52.414 ************************************ 00:39:52.414 10:12:30 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:39:52.414 10:12:30 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:39:52.414 10:12:30 ftl -- ftl/ftl.sh@14 -- # killprocess 74953 00:39:52.414 Process with pid 74953 is not found 00:39:52.414 10:12:30 ftl -- common/autotest_common.sh@954 -- # '[' -z 74953 ']' 00:39:52.414 10:12:30 ftl -- common/autotest_common.sh@958 -- # kill -0 74953 00:39:52.414 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74953) - No such process 00:39:52.414 10:12:30 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74953 is not found' 00:39:52.414 10:12:30 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:39:52.414 10:12:30 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=89359 00:39:52.414 10:12:30 ftl -- ftl/ftl.sh@20 -- # waitforlisten 89359 00:39:52.414 10:12:30 ftl -- common/autotest_common.sh@835 -- # '[' -z 89359 ']' 00:39:52.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:52.414 10:12:30 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:52.414 10:12:30 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:52.414 10:12:30 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:52.414 10:12:30 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:52.414 10:12:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:39:52.414 10:12:30 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:52.414 [2024-11-28 10:12:31.060532] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:39:52.414 [2024-11-28 10:12:31.060655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89359 ] 00:39:52.414 [2024-11-28 10:12:31.215487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.675 [2024-11-28 10:12:31.308066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:53.243 10:12:31 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:53.243 10:12:31 ftl -- common/autotest_common.sh@868 -- # return 0 00:39:53.243 10:12:31 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:39:53.243 nvme0n1 00:39:53.243 10:12:32 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:39:53.243 10:12:32 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:39:53.243 10:12:32 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:39:53.503 10:12:32 ftl -- ftl/common.sh@28 -- # stores=33e08e52-5d32-4e9f-b411-635878c9c093 00:39:53.503 10:12:32 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:39:53.503 10:12:32 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 33e08e52-5d32-4e9f-b411-635878c9c093 00:39:53.762 10:12:32 ftl -- ftl/ftl.sh@23 -- # killprocess 89359 00:39:53.762 10:12:32 ftl -- common/autotest_common.sh@954 -- # '[' -z 89359 ']' 00:39:53.762 10:12:32 ftl -- common/autotest_common.sh@958 -- # kill -0 89359 00:39:53.762 10:12:32 ftl -- common/autotest_common.sh@959 -- # uname 00:39:53.762 10:12:32 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:53.762 10:12:32 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89359 00:39:53.762 10:12:32 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:53.762 killing process with pid 89359 00:39:53.762 10:12:32 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:53.762 10:12:32 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89359' 00:39:53.762 10:12:32 ftl -- common/autotest_common.sh@973 -- # kill 89359 00:39:53.762 10:12:32 ftl -- common/autotest_common.sh@978 -- # wait 89359 00:39:55.147 10:12:33 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:39:55.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:55.147 Waiting for block devices as requested 00:39:55.147 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:39:55.409 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:39:55.409 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:39:55.409 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:40:00.704 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:40:00.704 Remove shared memory files 00:40:00.704 10:12:39 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:40:00.704 10:12:39 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 
00:40:00.704 10:12:39 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:40:00.704 10:12:39 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:40:00.704 10:12:39 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:40:00.704 10:12:39 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:40:00.704 10:12:39 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:40:00.704 00:40:00.704 real 22m13.068s 00:40:00.704 user 24m5.377s 00:40:00.704 sys 1m18.409s 00:40:00.704 10:12:39 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:00.704 10:12:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:40:00.704 ************************************ 00:40:00.704 END TEST ftl 00:40:00.704 ************************************ 00:40:00.704 10:12:39 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:40:00.704 10:12:39 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:40:00.704 10:12:39 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:40:00.704 10:12:39 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:40:00.704 10:12:39 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:40:00.704 10:12:39 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:40:00.704 10:12:39 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:40:00.704 10:12:39 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:40:00.704 10:12:39 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:40:00.704 10:12:39 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:40:00.704 10:12:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:00.704 10:12:39 -- common/autotest_common.sh@10 -- # set +x 00:40:00.704 10:12:39 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:40:00.704 10:12:39 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:40:00.704 10:12:39 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:40:00.704 10:12:39 -- common/autotest_common.sh@10 -- # set +x 00:40:02.089 INFO: APP EXITING 00:40:02.089 INFO: killing all VMs 00:40:02.089 INFO: killing vhost app 00:40:02.089 INFO: EXIT DONE 00:40:02.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:02.678 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:40:02.678 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:40:02.678 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:40:02.678 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:40:03.270 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:03.532 Cleaning 00:40:03.532 Removing: /var/run/dpdk/spdk0/config 00:40:03.532 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:03.532 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:03.532 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:03.532 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:03.532 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:03.532 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:03.532 Removing: /var/run/dpdk/spdk0 00:40:03.532 Removing: /var/run/dpdk/spdk_pid56927 00:40:03.532 Removing: /var/run/dpdk/spdk_pid57124 00:40:03.532 Removing: /var/run/dpdk/spdk_pid57336 00:40:03.532 Removing: /var/run/dpdk/spdk_pid57435 00:40:03.532 Removing: /var/run/dpdk/spdk_pid57469 00:40:03.532 Removing: /var/run/dpdk/spdk_pid57591 00:40:03.532 Removing: /var/run/dpdk/spdk_pid57604 00:40:03.532 Removing: /var/run/dpdk/spdk_pid57797 00:40:03.532 Removing: /var/run/dpdk/spdk_pid57895 00:40:03.532 Removing: /var/run/dpdk/spdk_pid57981 00:40:03.532 Removing: /var/run/dpdk/spdk_pid58092 
00:40:03.532 Removing: /var/run/dpdk/spdk_pid58178 00:40:03.532 Removing: /var/run/dpdk/spdk_pid58217 00:40:03.532 Removing: /var/run/dpdk/spdk_pid58254 00:40:03.532 Removing: /var/run/dpdk/spdk_pid58319 00:40:03.532 Removing: /var/run/dpdk/spdk_pid58425 00:40:03.532 Removing: /var/run/dpdk/spdk_pid58850 00:40:03.532 Removing: /var/run/dpdk/spdk_pid58914 00:40:03.532 Removing: /var/run/dpdk/spdk_pid58966 00:40:03.532 Removing: /var/run/dpdk/spdk_pid58982 00:40:03.532 Removing: /var/run/dpdk/spdk_pid59073 00:40:03.532 Removing: /var/run/dpdk/spdk_pid59089 00:40:03.532 Removing: /var/run/dpdk/spdk_pid59180 00:40:03.532 Removing: /var/run/dpdk/spdk_pid59196 00:40:03.532 Removing: /var/run/dpdk/spdk_pid59249 00:40:03.532 Removing: /var/run/dpdk/spdk_pid59267 00:40:03.532 Removing: /var/run/dpdk/spdk_pid59320 00:40:03.532 Removing: /var/run/dpdk/spdk_pid59338 00:40:03.532 Removing: /var/run/dpdk/spdk_pid59487 00:40:03.532 Removing: /var/run/dpdk/spdk_pid59524 00:40:03.532 Removing: /var/run/dpdk/spdk_pid59607 00:40:03.532 Removing: /var/run/dpdk/spdk_pid59779 00:40:03.532 Removing: /var/run/dpdk/spdk_pid59863 00:40:03.532 Removing: /var/run/dpdk/spdk_pid59900 00:40:03.532 Removing: /var/run/dpdk/spdk_pid60322 00:40:03.532 Removing: /var/run/dpdk/spdk_pid60420 00:40:03.532 Removing: /var/run/dpdk/spdk_pid60529 00:40:03.532 Removing: /var/run/dpdk/spdk_pid60584 00:40:03.532 Removing: /var/run/dpdk/spdk_pid60610 00:40:03.532 Removing: /var/run/dpdk/spdk_pid60688 00:40:03.532 Removing: /var/run/dpdk/spdk_pid61316 00:40:03.532 Removing: /var/run/dpdk/spdk_pid61353 00:40:03.532 Removing: /var/run/dpdk/spdk_pid61825 00:40:03.532 Removing: /var/run/dpdk/spdk_pid61923 00:40:03.532 Removing: /var/run/dpdk/spdk_pid62032 00:40:03.532 Removing: /var/run/dpdk/spdk_pid62085 00:40:03.532 Removing: /var/run/dpdk/spdk_pid62105 00:40:03.532 Removing: /var/run/dpdk/spdk_pid62136 00:40:03.532 Removing: /var/run/dpdk/spdk_pid63971 00:40:03.532 Removing: /var/run/dpdk/spdk_pid64108 00:40:03.794 Removing: /var/run/dpdk/spdk_pid64112 00:40:03.794 Removing: /var/run/dpdk/spdk_pid64124 00:40:03.794 Removing: /var/run/dpdk/spdk_pid64169 00:40:03.794 Removing: /var/run/dpdk/spdk_pid64173 00:40:03.794 Removing: /var/run/dpdk/spdk_pid64185 00:40:03.794 Removing: /var/run/dpdk/spdk_pid64234 00:40:03.794 Removing: /var/run/dpdk/spdk_pid64238 00:40:03.794 Removing: /var/run/dpdk/spdk_pid64250 00:40:03.794 Removing: /var/run/dpdk/spdk_pid64300 00:40:03.794 Removing: /var/run/dpdk/spdk_pid64304 00:40:03.794 Removing: /var/run/dpdk/spdk_pid64316 00:40:03.794 Removing: /var/run/dpdk/spdk_pid65713 00:40:03.794 Removing: /var/run/dpdk/spdk_pid65804 00:40:03.794 Removing: /var/run/dpdk/spdk_pid67209 00:40:03.794 Removing: /var/run/dpdk/spdk_pid68947 00:40:03.794 Removing: /var/run/dpdk/spdk_pid69021 00:40:03.794 Removing: /var/run/dpdk/spdk_pid69102 00:40:03.794 Removing: /var/run/dpdk/spdk_pid69206 00:40:03.794 Removing: /var/run/dpdk/spdk_pid69298 00:40:03.794 Removing: /var/run/dpdk/spdk_pid69388 00:40:03.794 Removing: /var/run/dpdk/spdk_pid69462 00:40:03.794 Removing: /var/run/dpdk/spdk_pid69543 00:40:03.794 Removing: /var/run/dpdk/spdk_pid69647 00:40:03.794 Removing: /var/run/dpdk/spdk_pid69739 00:40:03.794 Removing: /var/run/dpdk/spdk_pid69840 00:40:03.794 Removing: /var/run/dpdk/spdk_pid69912 00:40:03.794 Removing: /var/run/dpdk/spdk_pid69985 00:40:03.794 Removing: /var/run/dpdk/spdk_pid70089 00:40:03.794 Removing: /var/run/dpdk/spdk_pid70186 00:40:03.794 Removing: /var/run/dpdk/spdk_pid70283 00:40:03.794 Removing: 
/var/run/dpdk/spdk_pid70357 00:40:03.794 Removing: /var/run/dpdk/spdk_pid70433 00:40:03.794 Removing: /var/run/dpdk/spdk_pid70536 00:40:03.794 Removing: /var/run/dpdk/spdk_pid70629 00:40:03.794 Removing: /var/run/dpdk/spdk_pid70725 00:40:03.794 Removing: /var/run/dpdk/spdk_pid70799 00:40:03.794 Removing: /var/run/dpdk/spdk_pid70873 00:40:03.794 Removing: /var/run/dpdk/spdk_pid70942 00:40:03.794 Removing: /var/run/dpdk/spdk_pid71022 00:40:03.794 Removing: /var/run/dpdk/spdk_pid71126 00:40:03.794 Removing: /var/run/dpdk/spdk_pid71212 00:40:03.794 Removing: /var/run/dpdk/spdk_pid71310 00:40:03.794 Removing: /var/run/dpdk/spdk_pid71380 00:40:03.794 Removing: /var/run/dpdk/spdk_pid71454 00:40:03.794 Removing: /var/run/dpdk/spdk_pid71534 00:40:03.794 Removing: /var/run/dpdk/spdk_pid71603 00:40:03.794 Removing: /var/run/dpdk/spdk_pid71706 00:40:03.794 Removing: /var/run/dpdk/spdk_pid71799 00:40:03.794 Removing: /var/run/dpdk/spdk_pid71947 00:40:03.794 Removing: /var/run/dpdk/spdk_pid72231 00:40:03.794 Removing: /var/run/dpdk/spdk_pid72262 00:40:03.794 Removing: /var/run/dpdk/spdk_pid72716 00:40:03.794 Removing: /var/run/dpdk/spdk_pid72901 00:40:03.794 Removing: /var/run/dpdk/spdk_pid73001 00:40:03.794 Removing: /var/run/dpdk/spdk_pid73111 00:40:03.794 Removing: /var/run/dpdk/spdk_pid73159 00:40:03.794 Removing: /var/run/dpdk/spdk_pid73185 00:40:03.794 Removing: /var/run/dpdk/spdk_pid73482 00:40:03.794 Removing: /var/run/dpdk/spdk_pid73537 00:40:03.794 Removing: /var/run/dpdk/spdk_pid73609 00:40:03.794 Removing: /var/run/dpdk/spdk_pid74001 00:40:03.794 Removing: /var/run/dpdk/spdk_pid74148 00:40:03.794 Removing: /var/run/dpdk/spdk_pid74953 00:40:03.794 Removing: /var/run/dpdk/spdk_pid75084 00:40:03.795 Removing: /var/run/dpdk/spdk_pid75252 00:40:03.795 Removing: /var/run/dpdk/spdk_pid75355 00:40:03.795 Removing: /var/run/dpdk/spdk_pid75674 00:40:03.795 Removing: /var/run/dpdk/spdk_pid75933 00:40:03.795 Removing: /var/run/dpdk/spdk_pid76279 00:40:03.795 Removing: /var/run/dpdk/spdk_pid76462 00:40:03.795 Removing: /var/run/dpdk/spdk_pid76631 00:40:03.795 Removing: /var/run/dpdk/spdk_pid76688 00:40:03.795 Removing: /var/run/dpdk/spdk_pid76941 00:40:03.795 Removing: /var/run/dpdk/spdk_pid76966 00:40:03.795 Removing: /var/run/dpdk/spdk_pid77013 00:40:03.795 Removing: /var/run/dpdk/spdk_pid77351 00:40:03.795 Removing: /var/run/dpdk/spdk_pid77578 00:40:03.795 Removing: /var/run/dpdk/spdk_pid78474 00:40:03.795 Removing: /var/run/dpdk/spdk_pid79392 00:40:03.795 Removing: /var/run/dpdk/spdk_pid80377 00:40:03.795 Removing: /var/run/dpdk/spdk_pid81406 00:40:03.795 Removing: /var/run/dpdk/spdk_pid81549 00:40:03.795 Removing: /var/run/dpdk/spdk_pid81633 00:40:03.795 Removing: /var/run/dpdk/spdk_pid81976 00:40:03.795 Removing: /var/run/dpdk/spdk_pid82034 00:40:03.795 Removing: /var/run/dpdk/spdk_pid83015 00:40:03.795 Removing: /var/run/dpdk/spdk_pid83670 00:40:03.795 Removing: /var/run/dpdk/spdk_pid84638 00:40:03.795 Removing: /var/run/dpdk/spdk_pid84760 00:40:03.795 Removing: /var/run/dpdk/spdk_pid84802 00:40:03.795 Removing: /var/run/dpdk/spdk_pid84861 00:40:03.795 Removing: /var/run/dpdk/spdk_pid84912 00:40:03.795 Removing: /var/run/dpdk/spdk_pid84970 00:40:03.795 Removing: /var/run/dpdk/spdk_pid85153 00:40:03.795 Removing: /var/run/dpdk/spdk_pid85235 00:40:03.795 Removing: /var/run/dpdk/spdk_pid85302 00:40:03.795 Removing: /var/run/dpdk/spdk_pid85398 00:40:03.795 Removing: /var/run/dpdk/spdk_pid85434 00:40:03.795 Removing: /var/run/dpdk/spdk_pid85494 00:40:03.795 Removing: /var/run/dpdk/spdk_pid85647 
00:40:03.795 Removing: /var/run/dpdk/spdk_pid85878 00:40:04.057 Removing: /var/run/dpdk/spdk_pid86652 00:40:04.057 Removing: /var/run/dpdk/spdk_pid87569 00:40:04.057 Removing: /var/run/dpdk/spdk_pid88408 00:40:04.057 Removing: /var/run/dpdk/spdk_pid89359 00:40:04.057 Clean 00:40:04.057 10:12:42 -- common/autotest_common.sh@1453 -- # return 0 00:40:04.057 10:12:42 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:40:04.058 10:12:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:04.058 10:12:42 -- common/autotest_common.sh@10 -- # set +x 00:40:04.058 10:12:42 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:40:04.058 10:12:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:04.058 10:12:42 -- common/autotest_common.sh@10 -- # set +x 00:40:04.058 10:12:42 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:40:04.058 10:12:42 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:40:04.058 10:12:42 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:40:04.058 10:12:42 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:40:04.058 10:12:42 -- spdk/autotest.sh@398 -- # hostname 00:40:04.058 10:12:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:40:04.319 geninfo: WARNING: invalid characters removed from testname! 00:40:30.926 10:13:08 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:33.475 10:13:11 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:36.025 10:13:14 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:38.574 10:13:16 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:40.486 10:13:19 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:43.036 10:13:21 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:45.579 10:13:23 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:45.579 10:13:23 -- spdk/autorun.sh@1 -- $ timing_finish 00:40:45.579 10:13:23 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:40:45.579 10:13:23 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:45.579 10:13:23 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:40:45.579 10:13:23 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:40:45.579 + [[ -n 5035 ]] 00:40:45.579 + sudo kill 5035 00:40:45.590 [Pipeline] } 00:40:45.606 [Pipeline] // timeout 00:40:45.612 [Pipeline] } 00:40:45.626 [Pipeline] // stage 00:40:45.632 [Pipeline] } 00:40:45.646 [Pipeline] // catchError 00:40:45.656 [Pipeline] stage 00:40:45.658 [Pipeline] { (Stop VM) 00:40:45.671 [Pipeline] sh 00:40:45.956 + vagrant halt 00:40:48.500 ==> default: Halting domain... 00:40:53.803 [Pipeline] sh 00:40:54.087 + vagrant destroy -f 00:40:56.634 ==> default: Removing domain... 00:40:57.221 [Pipeline] sh 00:40:57.505 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:40:57.516 [Pipeline] } 00:40:57.535 [Pipeline] // stage 00:40:57.540 [Pipeline] } 00:40:57.558 [Pipeline] // dir 00:40:57.564 [Pipeline] } 00:40:57.581 [Pipeline] // wrap 00:40:57.588 [Pipeline] } 00:40:57.603 [Pipeline] // catchError 00:40:57.614 [Pipeline] stage 00:40:57.617 [Pipeline] { (Epilogue) 00:40:57.633 [Pipeline] sh 00:40:58.013 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:03.307 [Pipeline] catchError 00:41:03.309 [Pipeline] { 00:41:03.322 [Pipeline] sh 00:41:03.607 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:03.607 Artifacts sizes are good 00:41:03.617 [Pipeline] } 00:41:03.631 [Pipeline] // catchError 00:41:03.643 [Pipeline] archiveArtifacts 00:41:03.651 Archiving artifacts 00:41:03.758 [Pipeline] cleanWs 00:41:03.771 [WS-CLEANUP] Deleting project workspace... 00:41:03.771 [WS-CLEANUP] Deferred wipeout is used... 00:41:03.779 [WS-CLEANUP] done 00:41:03.781 [Pipeline] } 00:41:03.797 [Pipeline] // stage 00:41:03.802 [Pipeline] } 00:41:03.817 [Pipeline] // node 00:41:03.822 [Pipeline] End of Pipeline 00:41:03.860 Finished: SUCCESS