00:00:00.001 Started by upstream project "autotest-nightly" build number 4304
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3667
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.144 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.145 The recommended git tool is: git
00:00:00.145 using credential 00000000-0000-0000-0000-000000000002
00:00:00.147 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.206 Fetching changes from the remote Git repository
00:00:00.208 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.259 Using shallow fetch with depth 1
00:00:00.259 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.259 > git --version # timeout=10
00:00:00.311 > git --version # 'git version 2.39.2'
00:00:00.311 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.339 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.339 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.709 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.720 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.733 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.733 > git config core.sparsecheckout # timeout=10
00:00:07.743 > git read-tree -mu HEAD # timeout=10
00:00:07.759 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.785 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.786 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.893 [Pipeline] Start of Pipeline
00:00:07.906 [Pipeline] library
00:00:07.909 Loading library shm_lib@master
00:00:07.909 Library shm_lib@master is cached. Copying from home.
00:00:07.924 [Pipeline] node
00:00:07.942 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.944 [Pipeline] {
00:00:07.954 [Pipeline] catchError
00:00:07.957 [Pipeline] {
00:00:07.970 [Pipeline] wrap
00:00:07.980 [Pipeline] {
00:00:07.985 [Pipeline] stage
00:00:07.987 [Pipeline] { (Prologue)
00:00:08.001 [Pipeline] echo
00:00:08.002 Node: VM-host-SM38
00:00:08.008 [Pipeline] cleanWs
00:00:08.020 [WS-CLEANUP] Deleting project workspace...
00:00:08.020 [WS-CLEANUP] Deferred wipeout is used...
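The checkout above can be reproduced outside Jenkins. A minimal sketch using the repository URL and revision from the log (it assumes anonymous read access and that the pinned commit is still the tip of refs/heads/master, since the fetch is shallow; Jenkins itself does these steps through its git plugin, with credentials and the proxy shown above):

  # Shallow-fetch the build-pool repo and pin the revision seen in the log.
  git init jbp && cd jbp
  git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f db4637e8b949f278f369ec13f70585206ccd9507   # FETCH_HEAD above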
00:00:08.026 [WS-CLEANUP] done
00:00:08.234 [Pipeline] setCustomBuildProperty
00:00:08.362 [Pipeline] httpRequest
00:00:10.755 [Pipeline] echo
00:00:10.756 Sorcerer 10.211.164.101 is alive
00:00:10.764 [Pipeline] retry
00:00:10.766 [Pipeline] {
00:00:10.776 [Pipeline] httpRequest
00:00:10.782 HttpMethod: GET
00:00:10.783 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.783 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.805 Response Code: HTTP/1.1 200 OK
00:00:10.806 Success: Status code 200 is in the accepted range: 200,404
00:00:10.806 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.668 [Pipeline] }
00:00:14.687 [Pipeline] // retry
00:00:14.695 [Pipeline] sh
00:00:14.983 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.002 [Pipeline] httpRequest
00:00:17.055 [Pipeline] echo
00:00:17.056 Sorcerer 10.211.164.101 is alive
00:00:17.065 [Pipeline] retry
00:00:17.066 [Pipeline] {
00:00:17.075 [Pipeline] httpRequest
00:00:17.079 HttpMethod: GET
00:00:17.080 URL: http://10.211.164.101/packages/spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz
00:00:17.081 Sending request to url: http://10.211.164.101/packages/spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz
00:00:17.105 Response Code: HTTP/1.1 200 OK
00:00:17.106 Success: Status code 200 is in the accepted range: 200,404
00:00:17.106 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz
00:01:18.165 [Pipeline] }
00:01:18.186 [Pipeline] // retry
00:01:18.194 [Pipeline] sh
00:01:18.477 + tar --no-same-owner -xf spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz
00:01:21.059 [Pipeline] sh
00:01:21.338 + git -C spdk log --oneline -n5
00:01:21.338 2a91567e4 CHANGELOG.md: corrected typo
00:01:21.338 6c35d974e lib/nvme: destruct controllers that failed init asynchronously
00:01:21.338 414f91a0c lib/nvmf: Fix double free of connect request
00:01:21.338 d8f6e798d nvme: Fix discovery loop when target has no entry
00:01:21.338 ff2e6bfe4 lib/lvol: cluster size must be a multiple of bs_dev->blocklen
00:01:21.358 [Pipeline] writeFile
00:01:21.372 [Pipeline] sh
00:01:21.657 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:21.670 [Pipeline] sh
00:01:21.958 + cat autorun-spdk.conf
00:01:21.958 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.958 SPDK_TEST_NVME=1
00:01:21.958 SPDK_TEST_FTL=1
00:01:21.958 SPDK_TEST_ISAL=1
00:01:21.958 SPDK_RUN_ASAN=1
00:01:21.958 SPDK_RUN_UBSAN=1
00:01:21.958 SPDK_TEST_XNVME=1
00:01:21.958 SPDK_TEST_NVME_FDP=1
00:01:21.958 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:21.967 RUN_NIGHTLY=1
00:01:21.969 [Pipeline] }
00:01:21.982 [Pipeline] // stage
00:01:21.997 [Pipeline] stage
00:01:21.999 [Pipeline] { (Run VM)
00:01:22.011 [Pipeline] sh
00:01:22.294 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:22.294 + echo 'Start stage prepare_nvme.sh'
00:01:22.294 Start stage prepare_nvme.sh
00:01:22.294 + [[ -n 10 ]]
00:01:22.294 + disk_prefix=ex10
00:01:22.294 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:22.294 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:22.294 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:22.294 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:22.294 ++ SPDK_TEST_NVME=1
00:01:22.294 ++ SPDK_TEST_FTL=1
00:01:22.294 ++ SPDK_TEST_ISAL=1
00:01:22.294 ++ SPDK_RUN_ASAN=1
00:01:22.294 ++ SPDK_RUN_UBSAN=1
00:01:22.294 ++ SPDK_TEST_XNVME=1
00:01:22.294 ++ SPDK_TEST_NVME_FDP=1
00:01:22.294 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:22.294 ++ RUN_NIGHTLY=1
00:01:22.294 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:22.294 + nvme_files=()
00:01:22.294 + declare -A nvme_files
00:01:22.294 + backend_dir=/var/lib/libvirt/images/backends
00:01:22.294 + nvme_files['nvme.img']=5G
00:01:22.294 + nvme_files['nvme-cmb.img']=5G
00:01:22.294 + nvme_files['nvme-multi0.img']=4G
00:01:22.294 + nvme_files['nvme-multi1.img']=4G
00:01:22.294 + nvme_files['nvme-multi2.img']=4G
00:01:22.294 + nvme_files['nvme-openstack.img']=8G
00:01:22.294 + nvme_files['nvme-zns.img']=5G
00:01:22.294 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:22.294 + (( SPDK_TEST_FTL == 1 ))
00:01:22.294 + nvme_files["nvme-ftl.img"]=6G
00:01:22.294 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:22.294 + nvme_files["nvme-fdp.img"]=1G
00:01:22.294 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:22.294 + for nvme in "${!nvme_files[@]}"
00:01:22.294 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G
00:01:22.294 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:22.294 + for nvme in "${!nvme_files[@]}"
00:01:22.294 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G
00:01:22.866 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:22.866 + for nvme in "${!nvme_files[@]}"
00:01:22.866 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G
00:01:22.866 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:22.866 + for nvme in "${!nvme_files[@]}"
00:01:22.866 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G
00:01:22.866 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:22.866 + for nvme in "${!nvme_files[@]}"
00:01:22.866 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G
00:01:22.866 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:22.866 + for nvme in "${!nvme_files[@]}"
00:01:22.866 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G
00:01:23.128 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:23.128 + for nvme in "${!nvme_files[@]}"
00:01:23.128 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G
00:01:23.128 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:23.128 + for nvme in "${!nvme_files[@]}"
00:01:23.128 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G
00:01:23.128 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:23.128 + for nvme in "${!nvme_files[@]}"
00:01:23.128 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G
00:01:23.390 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:23.390 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu
00:01:23.390 + echo 'End stage prepare_nvme.sh'
00:01:23.390 End stage prepare_nvme.sh
00:01:23.405 [Pipeline] sh
00:01:23.687 + DISTRO=fedora39
00:01:23.687 + CPUS=10
00:01:23.687 + RAM=12288
00:01:23.687 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:23.687 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:23.687
00:01:23.687 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:23.687 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:23.687 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:23.687 HELP=0
00:01:23.687 DRY_RUN=0
00:01:23.688 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,
00:01:23.688 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:23.688 NVME_AUTO_CREATE=0
00:01:23.688 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,,
00:01:23.688 NVME_CMB=,,,,
00:01:23.688 NVME_PMR=,,,,
00:01:23.688 NVME_ZNS=,,,,
00:01:23.688 NVME_MS=true,,,,
00:01:23.688 NVME_FDP=,,,on,
00:01:23.688 SPDK_VAGRANT_DISTRO=fedora39
00:01:23.688 SPDK_VAGRANT_VMCPU=10
00:01:23.688 SPDK_VAGRANT_VMRAM=12288
00:01:23.688 SPDK_VAGRANT_PROVIDER=libvirt
00:01:23.688 SPDK_VAGRANT_HTTP_PROXY=
00:01:23.688 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:23.688 SPDK_OPENSTACK_NETWORK=0
00:01:23.688 VAGRANT_PACKAGE_BOX=0
00:01:23.688 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:23.688 FORCE_DISTRO=true
00:01:23.688 VAGRANT_BOX_VERSION=
00:01:23.688 EXTRA_VAGRANTFILES=
00:01:23.688 NIC_MODEL=e1000
00:01:23.688
00:01:23.688 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:23.688 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:26.232 Bringing machine 'default' up with 'libvirt' provider...
00:01:26.806 ==> default: Creating image (snapshot of base box volume).
00:01:26.806 ==> default: Creating domain with the following settings...
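prepare_nvme.sh drives create_nvme_img.sh from an associative array of image name to size, as traced above. A condensed sketch of that pattern (names, sizes and paths taken from the log; qemu-img is shown as an illustrative stand-in for create_nvme_img.sh, whose internals are not part of this log, although the 'Formatting ... fmt=raw ... preallocation=falloc' lines are qemu-img output):

  declare -A nvme_files=(
    [nvme.img]=5G [nvme-cmb.img]=5G [nvme-zns.img]=5G
    [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
    [nvme-openstack.img]=8G [nvme-ftl.img]=6G [nvme-fdp.img]=1G
  )
  backend_dir=/var/lib/libvirt/images/backends
  for nvme in "${!nvme_files[@]}"; do
    # Raw, falloc-preallocated backing file per disk, as in the log above.
    qemu-img create -f raw -o preallocation=falloc \
      "$backend_dir/ex10-$nvme" "${nvme_files[$nvme]}"
  done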
00:01:26.806 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732575778_e633c17ecb33e4a93304
00:01:26.806 ==> default: -- Domain type: kvm
00:01:26.806 ==> default: -- Cpus: 10
00:01:26.806 ==> default: -- Feature: acpi
00:01:26.806 ==> default: -- Feature: apic
00:01:26.806 ==> default: -- Feature: pae
00:01:26.806 ==> default: -- Memory: 12288M
00:01:26.806 ==> default: -- Memory Backing: hugepages:
00:01:26.806 ==> default: -- Management MAC:
00:01:26.806 ==> default: -- Loader:
00:01:26.806 ==> default: -- Nvram:
00:01:26.806 ==> default: -- Base box: spdk/fedora39
00:01:26.806 ==> default: -- Storage pool: default
00:01:26.806 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732575778_e633c17ecb33e4a93304.img (20G)
00:01:26.806 ==> default: -- Volume Cache: default
00:01:26.806 ==> default: -- Kernel:
00:01:26.806 ==> default: -- Initrd:
00:01:26.806 ==> default: -- Graphics Type: vnc
00:01:26.806 ==> default: -- Graphics Port: -1
00:01:26.806 ==> default: -- Graphics IP: 127.0.0.1
00:01:26.806 ==> default: -- Graphics Password: Not defined
00:01:26.806 ==> default: -- Video Type: cirrus
00:01:26.806 ==> default: -- Video VRAM: 9216
00:01:26.806 ==> default: -- Sound Type:
00:01:26.806 ==> default: -- Keymap: en-us
00:01:26.806 ==> default: -- TPM Path:
00:01:26.806 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:26.806 ==> default: -- Command line args:
00:01:26.806 ==> default: -> value=-device,
00:01:26.806 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:26.806 ==> default: -> value=-drive,
00:01:26.806 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:26.806 ==> default: -> value=-device,
00:01:26.806 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:26.806 ==> default: -> value=-device,
00:01:26.806 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:26.806 ==> default: -> value=-drive,
00:01:26.806 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0,
00:01:26.806 ==> default: -> value=-device,
00:01:26.806 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:26.806 ==> default: -> value=-device,
00:01:26.806 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:26.806 ==> default: -> value=-drive,
00:01:26.806 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:26.806 ==> default: -> value=-device,
00:01:26.806 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:26.806 ==> default: -> value=-drive,
00:01:26.806 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:26.806 ==> default: -> value=-device,
00:01:26.806 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:26.806 ==> default: -> value=-drive,
00:01:26.806 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:26.806 ==> default: -> value=-device,
00:01:26.806 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:26.806 ==> default: -> value=-device,
00:01:26.806 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:26.806 ==> default: -> value=-device,
00:01:26.806 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:26.806 ==> default: -> value=-drive,
00:01:26.806 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:26.806 ==> default: -> value=-device,
00:01:26.806 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:27.067 ==> default: Creating shared folders metadata...
00:01:27.067 ==> default: Starting domain.
00:01:28.475 ==> default: Waiting for domain to get an IP address...
00:01:46.600 ==> default: Waiting for SSH to become available...
00:01:46.600 ==> default: Configuring and enabling network interfaces...
00:01:50.806     default: SSH address: 192.168.121.178:22
00:01:50.806     default: SSH username: vagrant
00:01:50.806     default: SSH auth method: private key
00:01:52.831 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:59.416 ==> default: Mounting SSHFS shared folder...
00:02:00.801 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:00.801 ==> default: Checking Mount..
00:02:01.733 ==> default: Folder Successfully Mounted!
00:02:01.733
00:02:01.733 SUCCESS!
00:02:01.733
00:02:01.733 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:01.733 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:01.733 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:01.733
00:02:01.741 [Pipeline] }
00:02:01.759 [Pipeline] // stage
00:02:01.765 [Pipeline] dir
00:02:01.765 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:01.766 [Pipeline] {
00:02:01.774 [Pipeline] catchError
00:02:01.775 [Pipeline] {
00:02:01.783 [Pipeline] sh
00:02:02.058 + vagrant ssh-config --host vagrant
00:02:02.058 + sed -ne '/^Host/,$p'
00:02:02.058 + tee ssh_conf
00:02:04.615 Host vagrant
00:02:04.615   HostName 192.168.121.178
00:02:04.615   User vagrant
00:02:04.615   Port 22
00:02:04.615   UserKnownHostsFile /dev/null
00:02:04.615   StrictHostKeyChecking no
00:02:04.615   PasswordAuthentication no
00:02:04.615   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:04.615   IdentitiesOnly yes
00:02:04.615   LogLevel FATAL
00:02:04.615   ForwardAgent yes
00:02:04.615   ForwardX11 yes
00:02:04.615
00:02:04.628 [Pipeline] withEnv
00:02:04.630 [Pipeline] {
00:02:04.642 [Pipeline] sh
00:02:04.914 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:04.914 source /etc/os-release
00:02:04.914 [[ -e /image.version ]] && img=$(< /image.version)
00:02:04.914 # Minimal, systemd-like check.
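Of the generated arguments above, the last group is what wires up the FDP-capable controller: an nvme-subsys device with fdp=on, an nvme controller joined to it via subsys=, and an nvme-ns namespace backed by the 1G image. Collected into a single (heavily elided) invocation for readability, using the emulator path and values exactly as they appear in the log:

  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
    ... \
    -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
    -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0 \
    -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

The other controllers follow the same controller/drive/namespace pattern; nvme-2 simply attaches three nvme-ns devices (nsid=1..3) to one controller, which is why nvme2 later shows three block devices.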
00:02:04.914 if [[ -e /.dockerenv ]]; then
00:02:04.914   # Clear garbage from the node'\''s name:
00:02:04.914   # agt-er_autotest_547-896 -> autotest_547-896
00:02:04.914   # $HOSTNAME is the actual container id
00:02:04.914   agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:04.914   if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:04.914     # We can assume this is a mount from a host where container is running,
00:02:04.914     # so fetch its hostname to easily identify the target swarm worker.
00:02:04.914     container="$(< /etc/hostname) ($agent)"
00:02:04.914   else
00:02:04.914     # Fallback
00:02:04.914     container=$agent
00:02:04.914   fi
00:02:04.914 fi
00:02:04.914 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:04.914 '
00:02:04.923 [Pipeline] }
00:02:04.937 [Pipeline] // withEnv
00:02:04.945 [Pipeline] setCustomBuildProperty
00:02:04.958 [Pipeline] stage
00:02:04.960 [Pipeline] { (Tests)
00:02:04.974 [Pipeline] sh
00:02:05.245 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:05.257 [Pipeline] sh
00:02:05.530 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:05.546 [Pipeline] timeout
00:02:05.546 Timeout set to expire in 50 min
00:02:05.548 [Pipeline] {
00:02:05.563 [Pipeline] sh
00:02:05.837 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:06.141 HEAD is now at 2a91567e4 CHANGELOG.md: corrected typo
00:02:06.155 [Pipeline] sh
00:02:06.432 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:06.700 [Pipeline] sh
00:02:06.982 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:07.001 [Pipeline] sh
00:02:07.274 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:07.539 ++ readlink -f spdk_repo
00:02:07.539 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:07.539 + [[ -n /home/vagrant/spdk_repo ]]
00:02:07.539 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:07.539 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:07.539 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:07.539 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:07.539 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:07.539 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:07.539 + cd /home/vagrant/spdk_repo
00:02:07.539 + source /etc/os-release
00:02:07.539 ++ NAME='Fedora Linux'
00:02:07.539 ++ VERSION='39 (Cloud Edition)'
00:02:07.539 ++ ID=fedora
00:02:07.539 ++ VERSION_ID=39
00:02:07.539 ++ VERSION_CODENAME=
00:02:07.539 ++ PLATFORM_ID=platform:f39
00:02:07.539 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:07.539 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:07.539 ++ LOGO=fedora-logo-icon
00:02:07.539 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:07.539 ++ HOME_URL=https://fedoraproject.org/
00:02:07.539 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:07.539 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:07.539 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:07.539 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:07.539 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:07.539 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:07.539 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:07.539 ++ SUPPORT_END=2024-11-12
00:02:07.539 ++ VARIANT='Cloud Edition'
00:02:07.539 ++ VARIANT_ID=cloud
00:02:07.539 + uname -a
00:02:07.539 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:07.539 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:07.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:08.071 Hugepages
00:02:08.071 node     hugesize     free /  total
00:02:08.071 node0   1048576kB        0 /      0
00:02:08.071 node0      2048kB        0 /      0
00:02:08.071
00:02:08.071 Type    BDF             Vendor Device NUMA    Driver      Device  Block devices
00:02:08.071 virtio  0000:00:03.0    1af4   1001   unknown virtio-pci  -       vda
00:02:08.071 NVMe    0000:00:10.0    1b36   0010   unknown nvme        nvme0   nvme0n1
00:02:08.071 NVMe    0000:00:11.0    1b36   0010   unknown nvme        nvme1   nvme1n1
00:02:08.071 NVMe    0000:00:12.0    1b36   0010   unknown nvme        nvme2   nvme2n1 nvme2n2 nvme2n3
00:02:08.071 NVMe    0000:00:13.0    1b36   0010   unknown nvme        nvme3   nvme3n1
00:02:08.071 + rm -f /tmp/spdk-ld-path
00:02:08.072 + source autorun-spdk.conf
00:02:08.072 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:08.072 ++ SPDK_TEST_NVME=1
00:02:08.072 ++ SPDK_TEST_FTL=1
00:02:08.072 ++ SPDK_TEST_ISAL=1
00:02:08.072 ++ SPDK_RUN_ASAN=1
00:02:08.072 ++ SPDK_RUN_UBSAN=1
00:02:08.072 ++ SPDK_TEST_XNVME=1
00:02:08.072 ++ SPDK_TEST_NVME_FDP=1
00:02:08.072 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:08.072 ++ RUN_NIGHTLY=1
00:02:08.072 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:08.072 + [[ -n '' ]]
00:02:08.072 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:08.072 + for M in /var/spdk/build-*-manifest.txt
00:02:08.072 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:08.072 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:08.072 + for M in /var/spdk/build-*-manifest.txt
00:02:08.072 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:08.072 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:08.072 + for M in /var/spdk/build-*-manifest.txt
00:02:08.072 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:08.072 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:08.072 ++ uname
00:02:08.072 + [[ Linux == \L\i\n\u\x ]]
00:02:08.072 + sudo dmesg -T
00:02:08.072 + sudo dmesg --clear
00:02:08.072 + dmesg_pid=5029
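The Hugepages table printed by setup.sh status can be cross-checked straight from sysfs; a small sketch reading the same per-node counters (standard kernel paths, independent of SPDK):

  for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
      size=${hp##*hugepages-}
      printf '%s %10s %6s / %6s\n' "${node##*/}" "$size" \
        "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
    done
  done

Both pools are empty here (0 / 0), which is expected before setup.sh reserves any hugepage memory.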
+ [[ Fedora Linux == FreeBSD ]]
00:02:08.072 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:08.072 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:08.072 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:08.072 + [[ -x /usr/src/fio-static/fio ]]
00:02:08.072 + sudo dmesg -Tw
00:02:08.072 + export FIO_BIN=/usr/src/fio-static/fio
00:02:08.072 + FIO_BIN=/usr/src/fio-static/fio
00:02:08.072 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:08.072 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:08.072 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:08.072 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:08.072 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:08.072 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:08.072 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:08.072 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:08.072 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:08.338 23:03:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:08.338 23:03:40 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:08.338 23:03:40 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:08.338 23:03:40 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:08.338 23:03:40 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:08.338 23:03:40 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:08.338 23:03:40 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:08.338 23:03:40 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:08.338 23:03:40 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:08.338 23:03:40 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:08.338 23:03:40 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:08.338 23:03:40 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1
00:02:08.338 23:03:40 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:08.338 23:03:40 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:08.338 23:03:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:08.338 23:03:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:08.338 23:03:40 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:08.338 23:03:40 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:08.338 23:03:40 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:08.338 23:03:40 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:08.338 23:03:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:08.338 23:03:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:08.338 23:03:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:08.338 23:03:40 -- paths/export.sh@5 -- $ export PATH
00:02:08.338 23:03:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:08.338 23:03:40 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:08.338 23:03:40 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:08.338 23:03:40 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732575820.XXXXXX
00:02:08.338 23:03:40 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732575820.vPqMFs
00:02:08.338 23:03:40 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:08.338 23:03:40 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:08.338 23:03:40 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:08.338 23:03:40 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:08.338 23:03:40 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:08.338 23:03:40 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:08.338 23:03:40 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:08.338 23:03:40 -- common/autotest_common.sh@10 -- $ set +x
00:02:08.338 23:03:40 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:08.338 23:03:40 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:08.338 23:03:40 -- pm/common@17 -- $ local monitor
00:02:08.338 23:03:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:08.338 23:03:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:08.338 23:03:40 -- pm/common@25 -- $ sleep 1
00:02:08.338 23:03:40 -- pm/common@21 -- $ date +%s
00:02:08.338 23:03:40 -- pm/common@21 -- $ date +%s
00:02:08.338 23:03:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732575820
00:02:08.338 23:03:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732575820
00:02:08.338 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732575820_collect-cpu-load.pm.log
00:02:08.338 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732575820_collect-vmstat.pm.log
00:02:09.274 23:03:41 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:09.274 23:03:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:09.274 23:03:41 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:09.274 23:03:41 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:09.274 23:03:41 -- spdk/autobuild.sh@16 -- $ date -u
00:02:09.274 Mon Nov 25 11:03:41 PM UTC 2024
00:02:09.274 23:03:41 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:09.274 v25.01-pre-240-g2a91567e4
00:02:09.274 23:03:41 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:09.274 23:03:41 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:09.274 23:03:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:09.274 23:03:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:09.274 23:03:41 -- common/autotest_common.sh@10 -- $ set +x
00:02:09.274 ************************************
00:02:09.274 START TEST asan
00:02:09.274 ************************************
00:02:09.274 using asan
00:02:09.274 23:03:41 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:09.274
00:02:09.274 real	0m0.000s
00:02:09.274 user	0m0.000s
00:02:09.274 sys	0m0.000s
00:02:09.274 23:03:41 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:09.274 ************************************
00:02:09.274 END TEST asan
00:02:09.274 ************************************
00:02:09.274 23:03:41 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:09.274 23:03:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:09.274 23:03:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:09.274 23:03:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:09.274 23:03:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:09.274 23:03:41 -- common/autotest_common.sh@10 -- $ set +x
00:02:09.274 ************************************
00:02:09.274 START TEST ubsan
00:02:09.274 ************************************
00:02:09.274 using ubsan
00:02:09.274 23:03:41 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:09.274
00:02:09.274 real	0m0.000s
00:02:09.274 user	0m0.000s
00:02:09.274 sys	0m0.000s
00:02:09.274 23:03:41 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:09.274 ************************************
00:02:09.274 END TEST ubsan
00:02:09.274 ************************************
00:02:09.274 23:03:41 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:09.274 23:03:41 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:09.274 23:03:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:09.274 23:03:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:09.274 23:03:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:09.274 23:03:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:09.274 23:03:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:09.274 23:03:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
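The START TEST / END TEST blocks above come from SPDK's run_test helper in autotest_common.sh, which wraps a command in banners and a time measurement. A minimal re-implementation sketch of the observable behavior (not the actual SPDK code):

  run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"            # e.g. run_test asan echo 'using asan'
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
  }

With the time keyword as the wrapper, the real/user/sys lines match the 0m0.000s output seen above for the trivial echo tests.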
00:02:09.274 23:03:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:09.274 23:03:41 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:09.533 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:09.533 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:09.803 Using 'verbs' RDMA provider
00:02:20.714 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:30.699 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:30.957 Creating mk/config.mk...done.
00:02:30.957 Creating mk/cc.flags.mk...done.
00:02:30.957 Type 'make' to build.
00:02:30.957 23:04:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:30.957 23:04:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:30.957 23:04:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:30.957 23:04:03 -- common/autotest_common.sh@10 -- $ set +x
00:02:30.957 ************************************
00:02:30.957 START TEST make
00:02:30.957 ************************************
00:02:30.957 23:04:03 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:31.215 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:31.215   export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:31.215   meson setup builddir \
00:02:31.215     -Dwith-libaio=enabled \
00:02:31.215     -Dwith-liburing=enabled \
00:02:31.215     -Dwith-libvfn=disabled \
00:02:31.215     -Dwith-spdk=disabled \
00:02:31.215     -Dexamples=false \
00:02:31.215     -Dtests=false \
00:02:31.215     -Dtools=false && \
00:02:31.215   meson compile -C builddir && \
00:02:31.215   cd -)
00:02:31.215 make[1]: Nothing to be done for 'all'.
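The meson setup call above turns each -Dwith-* feature into a dependency probe, and the detection output that follows ('Run-time dependency liburing found: YES 2.2' and friends) mostly comes down to pkg-config lookups; the libisal line below even notes it tried pkgconfig and cmake. An equivalent manual probe, for illustration only:

  # Roughly what meson's dependency('liburing') resolves via pkg-config:
  if pkg-config --exists liburing; then
    echo "liburing $(pkg-config --modversion liburing)"
  else
    echo 'liburing: not found'
  fi

Header checks such as 'Has header "libaio.h"' are instead tiny compile tests against the C compiler reported at the top of the meson output.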
00:02:33.745 The Meson build system
00:02:33.745 Version: 1.5.0
00:02:33.745 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:33.745 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:33.745 Build type: native build
00:02:33.745 Project name: xnvme
00:02:33.745 Project version: 0.7.5
00:02:33.745 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:33.745 C linker for the host machine: cc ld.bfd 2.40-14
00:02:33.745 Host machine cpu family: x86_64
00:02:33.745 Host machine cpu: x86_64
00:02:33.745 Message: host_machine.system: linux
00:02:33.745 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:33.745 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:33.745 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:33.745 Run-time dependency threads found: YES
00:02:33.745 Has header "setupapi.h" : NO
00:02:33.745 Has header "linux/blkzoned.h" : YES
00:02:33.745 Has header "linux/blkzoned.h" : YES (cached)
00:02:33.745 Has header "libaio.h" : YES
00:02:33.745 Library aio found: YES
00:02:33.745 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:33.745 Run-time dependency liburing found: YES 2.2
00:02:33.745 Dependency libvfn skipped: feature with-libvfn disabled
00:02:33.745 Found CMake: /usr/bin/cmake (3.27.7)
00:02:33.745 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:33.745 Subproject spdk : skipped: feature with-spdk disabled
00:02:33.745 Run-time dependency appleframeworks found: NO (tried framework)
00:02:33.745 Run-time dependency appleframeworks found: NO (tried framework)
00:02:33.745 Library rt found: YES
00:02:33.745 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:33.745 Configuring xnvme_config.h using configuration
00:02:33.745 Configuring xnvme.spec using configuration
00:02:33.745 Run-time dependency bash-completion found: YES 2.11
00:02:33.745 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:33.745 Program cp found: YES (/usr/bin/cp)
00:02:33.745 Build targets in project: 3
00:02:33.745
00:02:33.745 xnvme 0.7.5
00:02:33.745
00:02:33.745   Subprojects
00:02:33.745     spdk         : NO Feature 'with-spdk' disabled
00:02:33.745
00:02:33.745   User defined options
00:02:33.745     examples     : false
00:02:33.745     tests        : false
00:02:33.745     tools        : false
00:02:33.745     with-libaio  : enabled
00:02:33.745     with-liburing: enabled
00:02:33.745     with-libvfn  : disabled
00:02:33.745     with-spdk    : disabled
00:02:33.745
00:02:33.745 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:33.745 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:33.745 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:33.745 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:33.745 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:33.745 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:33.745 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:33.745 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:33.745 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:34.004 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:34.004 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:34.004 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:34.004 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:34.004 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:34.004 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:34.004 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:34.004 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:34.004 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:34.004 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:34.004 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:34.004 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:34.004 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:34.004 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:34.004 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:34.004 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:34.004 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:34.004 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:34.004 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:34.004 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:34.004 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:34.004 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:34.004 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:34.004 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:34.004 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:34.004 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:34.004 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:34.004 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:34.262 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:34.262 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:34.262 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:34.262 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:34.262 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:34.262 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:34.262 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:34.262 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:34.262 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:34.262 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:34.262 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:34.262 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:34.262 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:34.263 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:34.263 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:34.263 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:34.263 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:34.263 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:34.263 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:34.263 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:34.263 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:34.263 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:34.263 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:34.263 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:34.263 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:34.263 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:34.263 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:34.263 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:34.263 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:34.263 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:34.521 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:34.521 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:34.521 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:34.521 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:34.521 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:34.521 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:34.521 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:34.521 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:34.779 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:34.779 [75/76] Linking static target lib/libxnvme.a
00:02:34.779 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:34.779 INFO: autodetecting backend as ninja
00:02:34.779 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:35.037 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:41.599 The Meson build system
00:02:41.599 Version: 1.5.0
00:02:41.599 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:41.599 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:41.599 Build type: native build
00:02:41.599 Program cat found: YES (/usr/bin/cat)
00:02:41.599 Project name: DPDK
00:02:41.599 Project version: 24.03.0
00:02:41.599 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:41.599 C linker for the host machine: cc ld.bfd 2.40-14
00:02:41.599 Host machine cpu family: x86_64
00:02:41.599 Host machine cpu: x86_64
00:02:41.599 Message: ## Building in Developer Mode ##
00:02:41.599 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:41.599 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:41.599 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:41.599 Program python3 found: YES (/usr/bin/python3)
00:02:41.599 Program cat found: YES (/usr/bin/cat)
00:02:41.599 Compiler for C supports arguments -march=native: YES
00:02:41.599 Checking for size of "void *" : 8
00:02:41.599 Checking for size of "void *" : 8 (cached)
00:02:41.599 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:41.599 Library m found: YES
00:02:41.599 Library numa found: YES
00:02:41.599 Has header "numaif.h" : YES
00:02:41.599 Library fdt found: NO
00:02:41.599 Library execinfo found: NO
00:02:41.599 Has header "execinfo.h" : YES
00:02:41.599 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:41.599 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:41.599 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:41.599 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:41.599 Run-time dependency openssl found: YES 3.1.1
00:02:41.599 Run-time dependency libpcap found: YES 1.10.4
00:02:41.599 Has header "pcap.h" with dependency libpcap: YES
00:02:41.599 Compiler for C supports arguments -Wcast-qual: YES
00:02:41.599 Compiler for C supports arguments -Wdeprecated: YES
00:02:41.599 Compiler for C supports arguments -Wformat: YES
00:02:41.599 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:41.599 Compiler for C supports arguments -Wformat-security: NO
00:02:41.599 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:41.599 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:41.599 Compiler for C supports arguments -Wnested-externs: YES
00:02:41.599 Compiler for C supports arguments -Wold-style-definition: YES
00:02:41.599 Compiler for C supports arguments -Wpointer-arith: YES
00:02:41.599 Compiler for C supports arguments -Wsign-compare: YES
00:02:41.599 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:41.599 Compiler for C supports arguments -Wundef: YES
00:02:41.599 Compiler for C supports arguments -Wwrite-strings: YES
00:02:41.599 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:41.599 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:41.599 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:41.599 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:41.599 Program objdump found: YES (/usr/bin/objdump)
00:02:41.599 Compiler for C supports arguments -mavx512f: YES
00:02:41.599 Checking if "AVX512 checking" compiles: YES
00:02:41.599 Fetching value of define "__SSE4_2__" : 1
00:02:41.599 Fetching value of define "__AES__" : 1
00:02:41.599 Fetching value of define "__AVX__" : 1
00:02:41.599 Fetching value of define "__AVX2__" : 1
00:02:41.599 Fetching value of define "__AVX512BW__" : 1
00:02:41.599 Fetching value of define "__AVX512CD__" : 1
00:02:41.599 Fetching value of define "__AVX512DQ__" : 1
00:02:41.599 Fetching value of define "__AVX512F__" : 1
00:02:41.599 Fetching value of define "__AVX512VL__" : 1
00:02:41.599 Fetching value of define "__PCLMUL__" : 1
00:02:41.599 Fetching value of define "__RDRND__" : 1
00:02:41.599 Fetching value of define "__RDSEED__" : 1
00:02:41.599 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:41.599 Fetching value of define "__znver1__" : (undefined)
00:02:41.599 Fetching value of define "__znver2__" : (undefined)
00:02:41.599 Fetching value of define "__znver3__" : (undefined)
00:02:41.599 Fetching value of define "__znver4__" : (undefined)
00:02:41.599 Library asan found: YES
00:02:41.599 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:41.599 Message: lib/log: Defining dependency "log"
00:02:41.599 Message: lib/kvargs: Defining dependency "kvargs"
00:02:41.599 Message: lib/telemetry: Defining dependency "telemetry"
00:02:41.599 Library rt found: YES
00:02:41.599 Checking for function "getentropy" : NO
00:02:41.599 Message: lib/eal: Defining dependency "eal"
00:02:41.599 Message: lib/ring: Defining dependency "ring"
00:02:41.599 Message: lib/rcu: Defining dependency "rcu"
00:02:41.599 Message: lib/mempool: Defining dependency "mempool"
00:02:41.599 Message: lib/mbuf: Defining dependency "mbuf"
00:02:41.599 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:41.599 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:41.599 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:41.599 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:41.599 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:41.599 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:41.599 Compiler for C supports arguments -mpclmul: YES
00:02:41.599 Compiler for C supports arguments -maes: YES
00:02:41.599 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:41.599 Compiler for C supports arguments -mavx512bw: YES
00:02:41.599 Compiler for C supports arguments -mavx512dq: YES
00:02:41.599 Compiler for C supports arguments -mavx512vl: YES
00:02:41.599 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:41.599 Compiler for C supports arguments -mavx2: YES
00:02:41.599 Compiler for C supports arguments -mavx: YES
00:02:41.599 Message: lib/net: Defining dependency "net"
00:02:41.599 Message: lib/meter: Defining dependency "meter"
00:02:41.599 Message: lib/ethdev: Defining dependency "ethdev"
00:02:41.599 Message: lib/pci: Defining dependency "pci"
00:02:41.599 Message: lib/cmdline: Defining dependency "cmdline"
00:02:41.599 Message: lib/hash: Defining dependency "hash"
00:02:41.599 Message: lib/timer: Defining dependency "timer"
00:02:41.599 Message: lib/compressdev: Defining dependency "compressdev"
00:02:41.599 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:41.599 Message: lib/dmadev: Defining dependency "dmadev"
00:02:41.599 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:41.599 Message: lib/power: Defining dependency "power"
00:02:41.599 Message: lib/reorder: Defining dependency "reorder"
00:02:41.599 Message: lib/security: Defining dependency "security"
00:02:41.599 Has header "linux/userfaultfd.h" : YES
00:02:41.599 Has header "linux/vduse.h" : YES
00:02:41.599 Message: lib/vhost: Defining dependency "vhost"
00:02:41.599 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:41.599 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:41.599 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:41.599 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:41.599 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:41.599 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:41.599 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:41.599 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:41.599 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:41.599 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:41.599 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:41.599 Configuring doxy-api-html.conf using configuration
00:02:41.599 Configuring doxy-api-man.conf using configuration
00:02:41.599 Program mandb found: YES (/usr/bin/mandb)
00:02:41.599 Program sphinx-build found: NO
00:02:41.599 Configuring rte_build_config.h using configuration
00:02:41.599 Message:
00:02:41.599 =================
00:02:41.599 Applications Enabled
00:02:41.599 =================
00:02:41.600
00:02:41.600 apps:
00:02:41.600
00:02:41.600
00:02:41.600 Message:
00:02:41.600 =================
00:02:41.600 Libraries Enabled
00:02:41.600 =================
00:02:41.600
00:02:41.600 libs:
00:02:41.600   log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:41.600   net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:41.600   cryptodev, dmadev, power, reorder, security, vhost,
00:02:41.600
00:02:41.600 Message:
00:02:41.600 ===============
00:02:41.600 Drivers Enabled
00:02:41.600 ===============
00:02:41.600
00:02:41.600 common:
00:02:41.600
00:02:41.600 bus:
00:02:41.600   pci, vdev,
00:02:41.600 mempool:
00:02:41.600   ring,
00:02:41.600 dma:
00:02:41.600
00:02:41.600 net:
00:02:41.600
00:02:41.600 crypto:
00:02:41.600
00:02:41.600 compress:
00:02:41.600
00:02:41.600 vdpa:
00:02:41.600
00:02:41.600
00:02:41.600 Message:
00:02:41.600 =================
00:02:41.600 Content Skipped
00:02:41.600 =================
00:02:41.600
00:02:41.600 apps:
00:02:41.600   dumpcap: explicitly disabled via build config
00:02:41.600   graph: explicitly disabled via build config
00:02:41.600   pdump: explicitly disabled via build config
00:02:41.600   proc-info: explicitly disabled via build config
00:02:41.600   test-acl: explicitly disabled via build config
00:02:41.600   test-bbdev: explicitly disabled via build config
00:02:41.600   test-cmdline: explicitly disabled via build config
00:02:41.600   test-compress-perf: explicitly disabled via build config
00:02:41.600   test-crypto-perf: explicitly disabled via build config
00:02:41.600   test-dma-perf: explicitly disabled via build config
00:02:41.600   test-eventdev: explicitly disabled via build config
00:02:41.600   test-fib: explicitly disabled via build config
00:02:41.600   test-flow-perf: explicitly disabled via build config
00:02:41.600   test-gpudev: explicitly disabled via build config
00:02:41.600   test-mldev: explicitly disabled via build config
00:02:41.600   test-pipeline: explicitly disabled via build config
00:02:41.600   test-pmd: explicitly disabled via build config
00:02:41.600   test-regex: explicitly disabled via build config
00:02:41.600   test-sad: explicitly disabled via build config
00:02:41.600   test-security-perf: explicitly disabled via build config
00:02:41.600
00:02:41.600 libs:
00:02:41.600   argparse: explicitly disabled via build config
00:02:41.600   metrics: explicitly disabled via build config
00:02:41.600   acl: explicitly disabled via build config
00:02:41.600   bbdev: explicitly disabled via build config
00:02:41.600   bitratestats: explicitly disabled via build config
00:02:41.600   bpf: explicitly disabled via build config
00:02:41.600   cfgfile: explicitly disabled via build config
00:02:41.600   distributor: explicitly disabled via build config
00:02:41.600   efd: explicitly disabled via build config
00:02:41.600   eventdev: explicitly disabled via build config
00:02:41.600   dispatcher: explicitly disabled via build config
00:02:41.600   gpudev: explicitly disabled via build config
00:02:41.600   gro: explicitly disabled via build config
00:02:41.600   gso: explicitly disabled via build config
00:02:41.600   ip_frag: explicitly disabled via build config
00:02:41.600   jobstats: explicitly disabled via build config
00:02:41.600   latencystats: explicitly disabled via build config
00:02:41.600   lpm: explicitly disabled via build config
00:02:41.600   member: explicitly disabled via build config
00:02:41.600   pcapng: explicitly disabled via build config
00:02:41.600   rawdev: explicitly disabled via build config
00:02:41.600   regexdev: explicitly disabled via build config
00:02:41.600   mldev: explicitly disabled via build config
00:02:41.600   rib: explicitly disabled via build config
00:02:41.600   sched: explicitly disabled via build config
00:02:41.600   stack: explicitly disabled via build config
00:02:41.600   ipsec: explicitly disabled via build config
00:02:41.600   pdcp: explicitly disabled via build config
00:02:41.600   fib: explicitly disabled via build config
00:02:41.600   port: explicitly disabled via build config
00:02:41.600   pdump: explicitly disabled via build config
00:02:41.600   table: explicitly disabled via build config
00:02:41.600   pipeline: explicitly disabled via build config
00:02:41.600   graph: explicitly disabled via build config
00:02:41.600   node: explicitly disabled via build config
00:02:41.600
00:02:41.600 drivers:
00:02:41.600   common/cpt: not in enabled drivers build config
00:02:41.600   common/dpaax: not in enabled drivers build config
00:02:41.600   common/iavf: not in enabled drivers build config
00:02:41.600   common/idpf: not in enabled drivers build config
00:02:41.600   common/ionic: not in enabled drivers build config
00:02:41.600   common/mvep: not in enabled drivers build config
00:02:41.600   common/octeontx: not in enabled drivers build config
00:02:41.600   bus/auxiliary: not in enabled drivers build config
00:02:41.600   bus/cdx: not in enabled drivers build config
00:02:41.600   bus/dpaa: not in enabled drivers build config
00:02:41.600   bus/fslmc: not in enabled drivers build config
00:02:41.600   bus/ifpga: not in enabled drivers build config
00:02:41.600   bus/platform: not in enabled drivers build config
00:02:41.600   bus/uacce: not in enabled drivers build config
00:02:41.600   bus/vmbus: not in enabled drivers build config
00:02:41.600   common/cnxk: not in enabled drivers build config
00:02:41.600   common/mlx5: not in enabled drivers build config
00:02:41.600   common/nfp: not in enabled drivers build config
00:02:41.600   common/nitrox: not in enabled drivers build config
00:02:41.600   common/qat: not in enabled drivers build config
00:02:41.600   common/sfc_efx: not in enabled drivers build config
00:02:41.600   mempool/bucket: not in enabled drivers build config
00:02:41.600   mempool/cnxk: not in enabled drivers build config
00:02:41.600   mempool/dpaa: not in enabled drivers build config
00:02:41.600   mempool/dpaa2: not in enabled drivers build config
00:02:41.600   mempool/octeontx: not in enabled drivers build config
00:02:41.600   mempool/stack: not in enabled drivers build config
00:02:41.600   dma/cnxk: not in enabled drivers build config
00:02:41.600   dma/dpaa: not in enabled drivers build config
00:02:41.600   dma/dpaa2: not in enabled drivers build config
00:02:41.600   dma/hisilicon: not in enabled drivers build config
00:02:41.600   dma/idxd: not in enabled drivers build config
00:02:41.600   dma/ioat: not in enabled drivers build config
00:02:41.600   dma/skeleton: not in enabled drivers build config
00:02:41.600   net/af_packet: not in enabled drivers build config
00:02:41.600   net/af_xdp: not in enabled drivers build config
00:02:41.600   net/ark: not in enabled drivers build config
00:02:41.600   net/atlantic: not in enabled drivers build config
00:02:41.600   net/avp: not in enabled drivers build config
00:02:41.600   net/axgbe: not in enabled drivers build config
00:02:41.600   net/bnx2x: not in enabled drivers build config
00:02:41.600   net/bnxt: not in enabled drivers build config
00:02:41.600   net/bonding: not in enabled drivers build config
00:02:41.600   net/cnxk: not in enabled drivers build config
00:02:41.600   net/cpfl: not in enabled drivers
build config 00:02:41.600 net/cxgbe: not in enabled drivers build config 00:02:41.600 net/dpaa: not in enabled drivers build config 00:02:41.600 net/dpaa2: not in enabled drivers build config 00:02:41.600 net/e1000: not in enabled drivers build config 00:02:41.600 net/ena: not in enabled drivers build config 00:02:41.600 net/enetc: not in enabled drivers build config 00:02:41.600 net/enetfec: not in enabled drivers build config 00:02:41.600 net/enic: not in enabled drivers build config 00:02:41.600 net/failsafe: not in enabled drivers build config 00:02:41.600 net/fm10k: not in enabled drivers build config 00:02:41.600 net/gve: not in enabled drivers build config 00:02:41.600 net/hinic: not in enabled drivers build config 00:02:41.600 net/hns3: not in enabled drivers build config 00:02:41.600 net/i40e: not in enabled drivers build config 00:02:41.600 net/iavf: not in enabled drivers build config 00:02:41.600 net/ice: not in enabled drivers build config 00:02:41.600 net/idpf: not in enabled drivers build config 00:02:41.600 net/igc: not in enabled drivers build config 00:02:41.600 net/ionic: not in enabled drivers build config 00:02:41.600 net/ipn3ke: not in enabled drivers build config 00:02:41.600 net/ixgbe: not in enabled drivers build config 00:02:41.600 net/mana: not in enabled drivers build config 00:02:41.600 net/memif: not in enabled drivers build config 00:02:41.600 net/mlx4: not in enabled drivers build config 00:02:41.600 net/mlx5: not in enabled drivers build config 00:02:41.600 net/mvneta: not in enabled drivers build config 00:02:41.600 net/mvpp2: not in enabled drivers build config 00:02:41.600 net/netvsc: not in enabled drivers build config 00:02:41.600 net/nfb: not in enabled drivers build config 00:02:41.600 net/nfp: not in enabled drivers build config 00:02:41.600 net/ngbe: not in enabled drivers build config 00:02:41.600 net/null: not in enabled drivers build config 00:02:41.600 net/octeontx: not in enabled drivers build config 00:02:41.600 net/octeon_ep: not in enabled drivers build config 00:02:41.600 net/pcap: not in enabled drivers build config 00:02:41.600 net/pfe: not in enabled drivers build config 00:02:41.600 net/qede: not in enabled drivers build config 00:02:41.600 net/ring: not in enabled drivers build config 00:02:41.600 net/sfc: not in enabled drivers build config 00:02:41.600 net/softnic: not in enabled drivers build config 00:02:41.600 net/tap: not in enabled drivers build config 00:02:41.601 net/thunderx: not in enabled drivers build config 00:02:41.601 net/txgbe: not in enabled drivers build config 00:02:41.601 net/vdev_netvsc: not in enabled drivers build config 00:02:41.601 net/vhost: not in enabled drivers build config 00:02:41.601 net/virtio: not in enabled drivers build config 00:02:41.601 net/vmxnet3: not in enabled drivers build config 00:02:41.601 raw/*: missing internal dependency, "rawdev" 00:02:41.601 crypto/armv8: not in enabled drivers build config 00:02:41.601 crypto/bcmfs: not in enabled drivers build config 00:02:41.601 crypto/caam_jr: not in enabled drivers build config 00:02:41.601 crypto/ccp: not in enabled drivers build config 00:02:41.601 crypto/cnxk: not in enabled drivers build config 00:02:41.601 crypto/dpaa_sec: not in enabled drivers build config 00:02:41.601 crypto/dpaa2_sec: not in enabled drivers build config 00:02:41.601 crypto/ipsec_mb: not in enabled drivers build config 00:02:41.601 crypto/mlx5: not in enabled drivers build config 00:02:41.601 crypto/mvsam: not in enabled drivers build config 00:02:41.601 crypto/nitrox: 
not in enabled drivers build config 00:02:41.601 crypto/null: not in enabled drivers build config 00:02:41.601 crypto/octeontx: not in enabled drivers build config 00:02:41.601 crypto/openssl: not in enabled drivers build config 00:02:41.601 crypto/scheduler: not in enabled drivers build config 00:02:41.601 crypto/uadk: not in enabled drivers build config 00:02:41.601 crypto/virtio: not in enabled drivers build config 00:02:41.601 compress/isal: not in enabled drivers build config 00:02:41.601 compress/mlx5: not in enabled drivers build config 00:02:41.601 compress/nitrox: not in enabled drivers build config 00:02:41.601 compress/octeontx: not in enabled drivers build config 00:02:41.601 compress/zlib: not in enabled drivers build config 00:02:41.601 regex/*: missing internal dependency, "regexdev" 00:02:41.601 ml/*: missing internal dependency, "mldev" 00:02:41.601 vdpa/ifc: not in enabled drivers build config 00:02:41.601 vdpa/mlx5: not in enabled drivers build config 00:02:41.601 vdpa/nfp: not in enabled drivers build config 00:02:41.601 vdpa/sfc: not in enabled drivers build config 00:02:41.601 event/*: missing internal dependency, "eventdev" 00:02:41.601 baseband/*: missing internal dependency, "bbdev" 00:02:41.601 gpu/*: missing internal dependency, "gpudev" 00:02:41.601 00:02:41.601 00:02:41.601 Build targets in project: 84 00:02:41.601 00:02:41.601 DPDK 24.03.0 00:02:41.601 00:02:41.601 User defined options 00:02:41.601 buildtype : debug 00:02:41.601 default_library : shared 00:02:41.601 libdir : lib 00:02:41.601 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:41.601 b_sanitize : address 00:02:41.601 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:41.601 c_link_args : 00:02:41.601 cpu_instruction_set: native 00:02:41.601 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:41.601 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:41.601 enable_docs : false 00:02:41.601 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:41.601 enable_kmods : false 00:02:41.601 max_lcores : 128 00:02:41.601 tests : false 00:02:41.601 00:02:41.601 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:41.601 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:41.601 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:41.601 [2/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:41.601 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:41.601 [4/267] Linking static target lib/librte_log.a 00:02:41.601 [5/267] Linking static target lib/librte_kvargs.a 00:02:41.601 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:41.859 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:41.859 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:42.118 [9/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:42.118 [10/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:42.118 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:42.118 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:42.118 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:42.118 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:42.118 [15/267] Linking static target lib/librte_telemetry.a 00:02:42.118 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:42.118 [17/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.118 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:42.379 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:42.379 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:42.379 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:42.379 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:42.637 [23/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.637 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:42.637 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:42.637 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:42.637 [27/267] Linking target lib/librte_log.so.24.1 00:02:42.637 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:42.637 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:42.637 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:42.637 [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:42.896 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:42.896 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:42.896 [34/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.896 [35/267] Linking target lib/librte_telemetry.so.24.1 00:02:42.896 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:42.896 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:42.896 [38/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:42.896 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:42.896 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:43.155 [41/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:43.155 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:43.155 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:43.155 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:43.155 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:43.155 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:43.155 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:43.155 [48/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:43.413 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:43.413 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:43.413 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:43.414 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:43.414 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:43.414 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:43.414 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:43.414 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:43.673 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:43.673 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:43.673 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:43.673 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:43.673 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:43.673 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:43.930 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:43.930 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:43.930 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:43.930 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:43.930 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:44.186 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:44.186 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:44.186 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:44.186 [71/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:44.187 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:44.187 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:44.187 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:44.187 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:44.187 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:44.445 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:44.445 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:44.445 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:44.445 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:44.445 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:44.701 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:44.701 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:44.701 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:44.701 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:44.701 [86/267] Linking static target lib/librte_eal.a 00:02:44.701 [87/267] Linking static target lib/librte_ring.a 00:02:44.958 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:44.958 [89/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:44.958 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:44.958 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:44.958 [92/267] Linking static target lib/librte_mempool.a 00:02:44.958 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:44.958 [94/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:45.215 [95/267] Linking static target lib/librte_rcu.a 00:02:45.215 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:45.215 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.215 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:45.472 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:45.472 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:45.472 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:45.473 [102/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.473 [103/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:45.473 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:45.729 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:45.730 [106/267] Linking static target lib/librte_meter.a 00:02:45.730 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:45.730 [108/267] Linking static target lib/librte_net.a 00:02:45.730 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:45.730 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:45.730 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:45.730 [112/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:45.987 [113/267] Linking static target lib/librte_mbuf.a 00:02:45.987 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:45.987 [115/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.987 [116/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.987 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.243 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:46.243 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:46.243 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:46.500 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:46.500 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:46.500 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:46.500 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:46.757 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:46.757 [126/267] Linking static target lib/librte_pci.a 00:02:46.758 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:46.758 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:46.758 [129/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.758 [130/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:46.758 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:46.758 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:46.758 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:47.040 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:47.040 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:47.040 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:47.040 [137/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.040 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:47.040 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:47.040 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:47.040 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:47.040 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:47.040 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:47.040 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:47.041 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:47.041 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:47.041 [147/267] Linking static target lib/librte_cmdline.a 00:02:47.304 [148/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:47.304 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:47.304 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:47.562 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:47.562 [152/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:47.562 [153/267] Linking static target lib/librte_timer.a 00:02:47.562 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:47.843 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:47.843 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:47.843 [157/267] Linking static target lib/librte_ethdev.a 00:02:47.843 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:47.843 [159/267] Linking static target lib/librte_compressdev.a 00:02:47.843 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:47.843 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:48.102 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:48.102 [163/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.102 [164/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:48.102 [165/267] Linking static target lib/librte_hash.a 00:02:48.102 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:48.102 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:48.102 [168/267] Linking static target lib/librte_dmadev.a 00:02:48.360 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:48.360 [170/267] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:48.360 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:48.618 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.618 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:48.618 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.618 [175/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:48.618 [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:48.618 [177/267] Linking static target lib/librte_cryptodev.a 00:02:48.618 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:48.885 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:48.885 [180/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.885 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:48.885 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:48.885 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:48.885 [184/267] Linking static target lib/librte_power.a 00:02:49.187 [185/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.187 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:49.187 [187/267] Linking static target lib/librte_reorder.a 00:02:49.187 [188/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:49.187 [189/267] Linking static target lib/librte_security.a 00:02:49.187 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:49.187 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:49.446 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:49.446 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.011 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:50.011 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.011 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:50.011 [197/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.011 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:50.270 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:50.270 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:50.270 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:50.270 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:50.529 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:50.529 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:50.529 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:50.529 [206/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:50.529 [207/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.787 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:50.787 [209/267] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:50.787 [210/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:50.787 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:50.787 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.787 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.787 [214/267] Linking static target drivers/librte_bus_vdev.a 00:02:50.787 [215/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:50.787 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.787 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.046 [218/267] Linking static target drivers/librte_bus_pci.a 00:02:51.046 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:51.046 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:51.046 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.304 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:51.304 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.304 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.304 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:51.304 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.869 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:52.803 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.803 [229/267] Linking target lib/librte_eal.so.24.1 00:02:52.803 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:53.062 [231/267] Linking target lib/librte_ring.so.24.1 00:02:53.062 [232/267] Linking target lib/librte_pci.so.24.1 00:02:53.062 [233/267] Linking target lib/librte_timer.so.24.1 00:02:53.062 [234/267] Linking target lib/librte_meter.so.24.1 00:02:53.062 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:53.062 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:53.062 [237/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:53.062 [238/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:53.062 [239/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:53.062 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:53.062 [241/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:53.062 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:53.062 [243/267] Linking target lib/librte_rcu.so.24.1 00:02:53.062 [244/267] Linking target lib/librte_mempool.so.24.1 00:02:53.319 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:53.319 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:53.320 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:53.320 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:53.320 [249/267] Generating 
symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:53.320 [250/267] Linking target lib/librte_cryptodev.so.24.1 00:02:53.320 [251/267] Linking target lib/librte_reorder.so.24.1 00:02:53.320 [252/267] Linking target lib/librte_compressdev.so.24.1 00:02:53.320 [253/267] Linking target lib/librte_net.so.24.1 00:02:53.593 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:53.593 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:53.593 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:53.593 [257/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.593 [258/267] Linking target lib/librte_hash.so.24.1 00:02:53.593 [259/267] Linking target lib/librte_security.so.24.1 00:02:53.593 [260/267] Linking target lib/librte_ethdev.so.24.1 00:02:53.593 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:53.851 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:53.851 [263/267] Linking target lib/librte_power.so.24.1 00:02:54.417 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:54.675 [265/267] Linking static target lib/librte_vhost.a 00:02:55.608 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.883 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:55.883 INFO: autodetecting backend as ninja 00:02:55.883 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:10.774 CC lib/ut/ut.o 00:03:10.774 CC lib/log/log.o 00:03:10.774 CC lib/log/log_flags.o 00:03:10.774 CC lib/log/log_deprecated.o 00:03:10.774 CC lib/ut_mock/mock.o 00:03:10.774 LIB libspdk_ut_mock.a 00:03:10.774 LIB libspdk_log.a 00:03:10.774 LIB libspdk_ut.a 00:03:10.774 SO libspdk_ut_mock.so.6.0 00:03:10.774 SO libspdk_ut.so.2.0 00:03:10.774 SO libspdk_log.so.7.1 00:03:10.774 SYMLINK libspdk_ut_mock.so 00:03:10.774 SYMLINK libspdk_ut.so 00:03:10.774 SYMLINK libspdk_log.so 00:03:10.774 CC lib/ioat/ioat.o 00:03:10.774 CXX lib/trace_parser/trace.o 00:03:10.774 CC lib/dma/dma.o 00:03:10.774 CC lib/util/base64.o 00:03:10.774 CC lib/util/bit_array.o 00:03:10.774 CC lib/util/cpuset.o 00:03:10.774 CC lib/util/crc16.o 00:03:10.774 CC lib/util/crc32.o 00:03:10.774 CC lib/util/crc32c.o 00:03:10.774 CC lib/vfio_user/host/vfio_user_pci.o 00:03:10.774 CC lib/util/crc32_ieee.o 00:03:10.774 CC lib/util/crc64.o 00:03:10.774 CC lib/util/dif.o 00:03:10.774 CC lib/util/fd.o 00:03:10.774 LIB libspdk_dma.a 00:03:10.774 SO libspdk_dma.so.5.0 00:03:10.774 CC lib/util/fd_group.o 00:03:10.774 CC lib/vfio_user/host/vfio_user.o 00:03:10.774 CC lib/util/file.o 00:03:10.774 CC lib/util/hexlify.o 00:03:10.774 SYMLINK libspdk_dma.so 00:03:10.774 CC lib/util/iov.o 00:03:10.774 CC lib/util/math.o 00:03:10.774 LIB libspdk_ioat.a 00:03:10.774 SO libspdk_ioat.so.7.0 00:03:10.774 CC lib/util/net.o 00:03:10.774 SYMLINK libspdk_ioat.so 00:03:10.775 CC lib/util/pipe.o 00:03:10.775 LIB libspdk_vfio_user.a 00:03:10.775 CC lib/util/strerror_tls.o 00:03:10.775 CC lib/util/string.o 00:03:10.775 SO libspdk_vfio_user.so.5.0 00:03:10.775 CC lib/util/uuid.o 00:03:10.775 CC lib/util/xor.o 00:03:10.775 CC lib/util/zipf.o 00:03:10.775 SYMLINK libspdk_vfio_user.so 00:03:10.775 CC lib/util/md5.o 00:03:10.775 LIB libspdk_util.a 00:03:10.775 SO libspdk_util.so.10.1 00:03:10.775 LIB 
libspdk_trace_parser.a 00:03:10.775 SO libspdk_trace_parser.so.6.0 00:03:10.775 SYMLINK libspdk_util.so 00:03:10.775 SYMLINK libspdk_trace_parser.so 00:03:10.775 CC lib/idxd/idxd.o 00:03:10.775 CC lib/json/json_parse.o 00:03:10.775 CC lib/idxd/idxd_user.o 00:03:10.775 CC lib/json/json_util.o 00:03:10.775 CC lib/idxd/idxd_kernel.o 00:03:10.775 CC lib/json/json_write.o 00:03:10.775 CC lib/rdma_utils/rdma_utils.o 00:03:10.775 CC lib/env_dpdk/env.o 00:03:10.775 CC lib/vmd/vmd.o 00:03:10.775 CC lib/conf/conf.o 00:03:10.775 CC lib/vmd/led.o 00:03:10.775 LIB libspdk_conf.a 00:03:10.775 SO libspdk_conf.so.6.0 00:03:10.775 CC lib/env_dpdk/memory.o 00:03:10.775 SYMLINK libspdk_conf.so 00:03:10.775 CC lib/env_dpdk/pci.o 00:03:10.775 CC lib/env_dpdk/init.o 00:03:10.775 CC lib/env_dpdk/threads.o 00:03:10.775 LIB libspdk_rdma_utils.a 00:03:10.775 SO libspdk_rdma_utils.so.1.0 00:03:10.775 LIB libspdk_json.a 00:03:10.775 CC lib/env_dpdk/pci_ioat.o 00:03:10.775 SO libspdk_json.so.6.0 00:03:10.775 SYMLINK libspdk_rdma_utils.so 00:03:10.775 CC lib/env_dpdk/pci_virtio.o 00:03:10.775 SYMLINK libspdk_json.so 00:03:10.775 CC lib/env_dpdk/pci_vmd.o 00:03:10.775 CC lib/env_dpdk/pci_idxd.o 00:03:10.775 CC lib/env_dpdk/pci_event.o 00:03:10.775 CC lib/env_dpdk/sigbus_handler.o 00:03:10.775 CC lib/rdma_provider/common.o 00:03:10.775 CC lib/env_dpdk/pci_dpdk.o 00:03:10.775 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:10.775 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:10.775 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:10.775 LIB libspdk_idxd.a 00:03:10.775 CC lib/jsonrpc/jsonrpc_server.o 00:03:10.775 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:10.775 CC lib/jsonrpc/jsonrpc_client.o 00:03:10.775 SO libspdk_idxd.so.12.1 00:03:10.775 LIB libspdk_vmd.a 00:03:10.775 SO libspdk_vmd.so.6.0 00:03:10.775 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:10.775 SYMLINK libspdk_idxd.so 00:03:10.775 LIB libspdk_rdma_provider.a 00:03:10.775 SYMLINK libspdk_vmd.so 00:03:10.775 SO libspdk_rdma_provider.so.7.0 00:03:10.775 SYMLINK libspdk_rdma_provider.so 00:03:10.775 LIB libspdk_jsonrpc.a 00:03:11.033 SO libspdk_jsonrpc.so.6.0 00:03:11.033 SYMLINK libspdk_jsonrpc.so 00:03:11.292 CC lib/rpc/rpc.o 00:03:11.292 LIB libspdk_env_dpdk.a 00:03:11.548 LIB libspdk_rpc.a 00:03:11.548 SO libspdk_rpc.so.6.0 00:03:11.548 SO libspdk_env_dpdk.so.15.1 00:03:11.548 SYMLINK libspdk_rpc.so 00:03:11.548 SYMLINK libspdk_env_dpdk.so 00:03:11.548 CC lib/notify/notify.o 00:03:11.548 CC lib/keyring/keyring.o 00:03:11.548 CC lib/keyring/keyring_rpc.o 00:03:11.548 CC lib/notify/notify_rpc.o 00:03:11.548 CC lib/trace/trace_flags.o 00:03:11.548 CC lib/trace/trace_rpc.o 00:03:11.548 CC lib/trace/trace.o 00:03:11.806 LIB libspdk_notify.a 00:03:11.806 SO libspdk_notify.so.6.0 00:03:11.806 SYMLINK libspdk_notify.so 00:03:11.806 LIB libspdk_keyring.a 00:03:11.806 LIB libspdk_trace.a 00:03:11.806 SO libspdk_keyring.so.2.0 00:03:12.064 SO libspdk_trace.so.11.0 00:03:12.064 SYMLINK libspdk_keyring.so 00:03:12.064 SYMLINK libspdk_trace.so 00:03:12.321 CC lib/thread/iobuf.o 00:03:12.321 CC lib/thread/thread.o 00:03:12.321 CC lib/sock/sock.o 00:03:12.321 CC lib/sock/sock_rpc.o 00:03:12.579 LIB libspdk_sock.a 00:03:12.579 SO libspdk_sock.so.10.0 00:03:12.579 SYMLINK libspdk_sock.so 00:03:12.837 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:12.837 CC lib/nvme/nvme_fabric.o 00:03:12.837 CC lib/nvme/nvme_ctrlr.o 00:03:12.838 CC lib/nvme/nvme_ns_cmd.o 00:03:12.838 CC lib/nvme/nvme_ns.o 00:03:12.838 CC lib/nvme/nvme_pcie_common.o 00:03:12.838 CC lib/nvme/nvme_pcie.o 00:03:12.838 CC lib/nvme/nvme_qpair.o 
00:03:12.838 CC lib/nvme/nvme.o 00:03:13.403 CC lib/nvme/nvme_quirks.o 00:03:13.403 CC lib/nvme/nvme_transport.o 00:03:13.403 CC lib/nvme/nvme_discovery.o 00:03:13.403 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:13.662 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:13.662 CC lib/nvme/nvme_tcp.o 00:03:13.662 CC lib/nvme/nvme_opal.o 00:03:13.662 LIB libspdk_thread.a 00:03:13.662 SO libspdk_thread.so.11.0 00:03:13.919 SYMLINK libspdk_thread.so 00:03:13.919 CC lib/nvme/nvme_io_msg.o 00:03:13.919 CC lib/nvme/nvme_poll_group.o 00:03:13.919 CC lib/nvme/nvme_zns.o 00:03:13.919 CC lib/nvme/nvme_stubs.o 00:03:13.919 CC lib/nvme/nvme_auth.o 00:03:13.919 CC lib/nvme/nvme_cuse.o 00:03:13.919 CC lib/nvme/nvme_rdma.o 00:03:14.486 CC lib/accel/accel.o 00:03:14.486 CC lib/accel/accel_rpc.o 00:03:14.486 CC lib/accel/accel_sw.o 00:03:14.486 CC lib/blob/blobstore.o 00:03:14.486 CC lib/init/json_config.o 00:03:14.743 CC lib/init/subsystem.o 00:03:14.743 CC lib/virtio/virtio.o 00:03:14.743 CC lib/virtio/virtio_vhost_user.o 00:03:14.743 CC lib/virtio/virtio_vfio_user.o 00:03:14.743 CC lib/init/subsystem_rpc.o 00:03:14.743 CC lib/init/rpc.o 00:03:15.001 CC lib/virtio/virtio_pci.o 00:03:15.001 CC lib/blob/request.o 00:03:15.001 LIB libspdk_init.a 00:03:15.001 CC lib/blob/zeroes.o 00:03:15.001 SO libspdk_init.so.6.0 00:03:15.001 CC lib/blob/blob_bs_dev.o 00:03:15.001 CC lib/fsdev/fsdev.o 00:03:15.001 SYMLINK libspdk_init.so 00:03:15.001 CC lib/fsdev/fsdev_io.o 00:03:15.001 LIB libspdk_virtio.a 00:03:15.260 SO libspdk_virtio.so.7.0 00:03:15.260 CC lib/fsdev/fsdev_rpc.o 00:03:15.260 SYMLINK libspdk_virtio.so 00:03:15.260 LIB libspdk_accel.a 00:03:15.260 SO libspdk_accel.so.16.0 00:03:15.260 CC lib/event/app.o 00:03:15.260 SYMLINK libspdk_accel.so 00:03:15.260 CC lib/event/reactor.o 00:03:15.260 CC lib/event/log_rpc.o 00:03:15.260 CC lib/event/app_rpc.o 00:03:15.260 LIB libspdk_nvme.a 00:03:15.260 CC lib/event/scheduler_static.o 00:03:15.519 SO libspdk_nvme.so.15.0 00:03:15.519 CC lib/bdev/bdev_rpc.o 00:03:15.519 CC lib/bdev/part.o 00:03:15.519 CC lib/bdev/bdev.o 00:03:15.519 CC lib/bdev/bdev_zone.o 00:03:15.519 CC lib/bdev/scsi_nvme.o 00:03:15.777 LIB libspdk_fsdev.a 00:03:15.777 SO libspdk_fsdev.so.2.0 00:03:15.777 SYMLINK libspdk_nvme.so 00:03:15.777 SYMLINK libspdk_fsdev.so 00:03:15.777 LIB libspdk_event.a 00:03:15.778 SO libspdk_event.so.14.0 00:03:16.040 SYMLINK libspdk_event.so 00:03:16.040 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:16.605 LIB libspdk_fuse_dispatcher.a 00:03:16.605 SO libspdk_fuse_dispatcher.so.1.0 00:03:16.605 SYMLINK libspdk_fuse_dispatcher.so 00:03:17.539 LIB libspdk_blob.a 00:03:17.539 SO libspdk_blob.so.12.0 00:03:17.539 LIB libspdk_bdev.a 00:03:17.798 SYMLINK libspdk_blob.so 00:03:17.798 SO libspdk_bdev.so.17.0 00:03:17.798 SYMLINK libspdk_bdev.so 00:03:17.798 CC lib/lvol/lvol.o 00:03:17.798 CC lib/blobfs/blobfs.o 00:03:17.798 CC lib/blobfs/tree.o 00:03:18.056 CC lib/nbd/nbd_rpc.o 00:03:18.056 CC lib/nbd/nbd.o 00:03:18.056 CC lib/scsi/dev.o 00:03:18.056 CC lib/scsi/lun.o 00:03:18.056 CC lib/ublk/ublk.o 00:03:18.056 CC lib/nvmf/ctrlr.o 00:03:18.056 CC lib/ftl/ftl_core.o 00:03:18.056 CC lib/ftl/ftl_init.o 00:03:18.056 CC lib/ftl/ftl_layout.o 00:03:18.056 CC lib/scsi/port.o 00:03:18.056 CC lib/ublk/ublk_rpc.o 00:03:18.315 CC lib/ftl/ftl_debug.o 00:03:18.315 CC lib/scsi/scsi.o 00:03:18.315 CC lib/ftl/ftl_io.o 00:03:18.315 CC lib/ftl/ftl_sb.o 00:03:18.315 LIB libspdk_nbd.a 00:03:18.315 CC lib/scsi/scsi_bdev.o 00:03:18.315 SO libspdk_nbd.so.7.0 00:03:18.315 SYMLINK libspdk_nbd.so 00:03:18.315 CC 
lib/nvmf/ctrlr_discovery.o 00:03:18.315 CC lib/nvmf/ctrlr_bdev.o 00:03:18.315 CC lib/ftl/ftl_l2p.o 00:03:18.574 CC lib/nvmf/subsystem.o 00:03:18.574 LIB libspdk_blobfs.a 00:03:18.574 SO libspdk_blobfs.so.11.0 00:03:18.574 CC lib/ftl/ftl_l2p_flat.o 00:03:18.574 LIB libspdk_ublk.a 00:03:18.574 SYMLINK libspdk_blobfs.so 00:03:18.574 CC lib/ftl/ftl_nv_cache.o 00:03:18.574 SO libspdk_ublk.so.3.0 00:03:18.574 CC lib/ftl/ftl_band.o 00:03:18.574 SYMLINK libspdk_ublk.so 00:03:18.574 CC lib/scsi/scsi_pr.o 00:03:18.833 CC lib/nvmf/nvmf.o 00:03:18.833 LIB libspdk_lvol.a 00:03:18.833 SO libspdk_lvol.so.11.0 00:03:18.833 CC lib/nvmf/nvmf_rpc.o 00:03:18.833 CC lib/nvmf/transport.o 00:03:18.833 SYMLINK libspdk_lvol.so 00:03:18.833 CC lib/scsi/scsi_rpc.o 00:03:18.833 CC lib/scsi/task.o 00:03:19.092 CC lib/ftl/ftl_band_ops.o 00:03:19.092 CC lib/ftl/ftl_writer.o 00:03:19.092 CC lib/ftl/ftl_rq.o 00:03:19.092 LIB libspdk_scsi.a 00:03:19.092 SO libspdk_scsi.so.9.0 00:03:19.092 CC lib/ftl/ftl_reloc.o 00:03:19.092 SYMLINK libspdk_scsi.so 00:03:19.092 CC lib/nvmf/tcp.o 00:03:19.350 CC lib/nvmf/stubs.o 00:03:19.350 CC lib/ftl/ftl_l2p_cache.o 00:03:19.350 CC lib/iscsi/conn.o 00:03:19.609 CC lib/iscsi/init_grp.o 00:03:19.609 CC lib/iscsi/iscsi.o 00:03:19.609 CC lib/iscsi/param.o 00:03:19.609 CC lib/iscsi/portal_grp.o 00:03:19.609 CC lib/iscsi/tgt_node.o 00:03:19.867 CC lib/ftl/ftl_p2l.o 00:03:19.867 CC lib/ftl/ftl_p2l_log.o 00:03:19.867 CC lib/ftl/mngt/ftl_mngt.o 00:03:19.867 CC lib/vhost/vhost.o 00:03:19.867 CC lib/vhost/vhost_rpc.o 00:03:19.867 CC lib/vhost/vhost_scsi.o 00:03:20.124 CC lib/iscsi/iscsi_subsystem.o 00:03:20.124 CC lib/iscsi/iscsi_rpc.o 00:03:20.124 CC lib/iscsi/task.o 00:03:20.124 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:20.124 CC lib/nvmf/mdns_server.o 00:03:20.381 CC lib/vhost/vhost_blk.o 00:03:20.381 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:20.381 CC lib/vhost/rte_vhost_user.o 00:03:20.381 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:20.381 CC lib/nvmf/rdma.o 00:03:20.381 CC lib/nvmf/auth.o 00:03:20.381 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:20.381 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:20.638 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:20.638 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:20.638 LIB libspdk_iscsi.a 00:03:20.638 SO libspdk_iscsi.so.8.0 00:03:20.638 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:20.638 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:20.638 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:20.638 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:20.897 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:20.897 SYMLINK libspdk_iscsi.so 00:03:20.897 CC lib/ftl/utils/ftl_conf.o 00:03:20.897 CC lib/ftl/utils/ftl_md.o 00:03:20.897 CC lib/ftl/utils/ftl_mempool.o 00:03:20.897 CC lib/ftl/utils/ftl_bitmap.o 00:03:20.897 CC lib/ftl/utils/ftl_property.o 00:03:20.897 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:21.155 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:21.155 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:21.155 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:21.155 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:21.155 LIB libspdk_vhost.a 00:03:21.155 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:21.155 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:21.155 SO libspdk_vhost.so.8.0 00:03:21.155 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:21.155 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:21.155 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:21.155 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:21.155 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:21.155 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:21.155 SYMLINK libspdk_vhost.so 00:03:21.155 CC lib/ftl/base/ftl_base_dev.o 00:03:21.155 CC 
lib/ftl/base/ftl_base_bdev.o 00:03:21.414 CC lib/ftl/ftl_trace.o 00:03:21.414 LIB libspdk_ftl.a 00:03:21.671 SO libspdk_ftl.so.9.0 00:03:21.929 SYMLINK libspdk_ftl.so 00:03:22.194 LIB libspdk_nvmf.a 00:03:22.453 SO libspdk_nvmf.so.20.0 00:03:22.712 SYMLINK libspdk_nvmf.so 00:03:22.970 CC module/env_dpdk/env_dpdk_rpc.o 00:03:22.970 CC module/accel/iaa/accel_iaa.o 00:03:22.970 CC module/accel/ioat/accel_ioat.o 00:03:22.971 CC module/accel/error/accel_error.o 00:03:22.971 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:22.971 CC module/sock/posix/posix.o 00:03:22.971 CC module/fsdev/aio/fsdev_aio.o 00:03:22.971 CC module/accel/dsa/accel_dsa.o 00:03:22.971 CC module/keyring/file/keyring.o 00:03:22.971 CC module/blob/bdev/blob_bdev.o 00:03:22.971 LIB libspdk_env_dpdk_rpc.a 00:03:22.971 SO libspdk_env_dpdk_rpc.so.6.0 00:03:22.971 SYMLINK libspdk_env_dpdk_rpc.so 00:03:22.971 CC module/keyring/file/keyring_rpc.o 00:03:22.971 CC module/accel/iaa/accel_iaa_rpc.o 00:03:22.971 CC module/accel/ioat/accel_ioat_rpc.o 00:03:22.971 LIB libspdk_scheduler_dynamic.a 00:03:22.971 CC module/accel/dsa/accel_dsa_rpc.o 00:03:22.971 SO libspdk_scheduler_dynamic.so.4.0 00:03:23.229 CC module/accel/error/accel_error_rpc.o 00:03:23.229 SYMLINK libspdk_scheduler_dynamic.so 00:03:23.229 LIB libspdk_accel_iaa.a 00:03:23.229 LIB libspdk_keyring_file.a 00:03:23.229 SO libspdk_accel_iaa.so.3.0 00:03:23.229 SO libspdk_keyring_file.so.2.0 00:03:23.229 LIB libspdk_accel_ioat.a 00:03:23.229 LIB libspdk_blob_bdev.a 00:03:23.229 SYMLINK libspdk_accel_iaa.so 00:03:23.229 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:23.229 SO libspdk_accel_ioat.so.6.0 00:03:23.229 SYMLINK libspdk_keyring_file.so 00:03:23.229 SO libspdk_blob_bdev.so.12.0 00:03:23.229 LIB libspdk_accel_dsa.a 00:03:23.229 CC module/fsdev/aio/linux_aio_mgr.o 00:03:23.229 SO libspdk_accel_dsa.so.5.0 00:03:23.229 LIB libspdk_accel_error.a 00:03:23.229 SYMLINK libspdk_accel_ioat.so 00:03:23.229 SYMLINK libspdk_blob_bdev.so 00:03:23.229 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:23.229 SO libspdk_accel_error.so.2.0 00:03:23.229 SYMLINK libspdk_accel_dsa.so 00:03:23.229 CC module/keyring/linux/keyring.o 00:03:23.229 SYMLINK libspdk_accel_error.so 00:03:23.229 CC module/keyring/linux/keyring_rpc.o 00:03:23.487 CC module/scheduler/gscheduler/gscheduler.o 00:03:23.487 LIB libspdk_scheduler_dpdk_governor.a 00:03:23.487 LIB libspdk_keyring_linux.a 00:03:23.487 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:23.487 CC module/bdev/delay/vbdev_delay.o 00:03:23.487 SO libspdk_keyring_linux.so.1.0 00:03:23.487 LIB libspdk_scheduler_gscheduler.a 00:03:23.487 CC module/blobfs/bdev/blobfs_bdev.o 00:03:23.487 CC module/bdev/error/vbdev_error.o 00:03:23.487 SO libspdk_scheduler_gscheduler.so.4.0 00:03:23.487 CC module/bdev/gpt/gpt.o 00:03:23.487 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:23.487 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:23.487 SYMLINK libspdk_keyring_linux.so 00:03:23.487 SYMLINK libspdk_scheduler_gscheduler.so 00:03:23.487 CC module/bdev/lvol/vbdev_lvol.o 00:03:23.487 LIB libspdk_sock_posix.a 00:03:23.487 SO libspdk_sock_posix.so.6.0 00:03:23.488 LIB libspdk_fsdev_aio.a 00:03:23.746 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:23.747 SO libspdk_fsdev_aio.so.1.0 00:03:23.747 LIB libspdk_blobfs_bdev.a 00:03:23.747 SYMLINK libspdk_sock_posix.so 00:03:23.747 CC module/bdev/malloc/bdev_malloc.o 00:03:23.747 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:23.747 CC module/bdev/gpt/vbdev_gpt.o 00:03:23.747 SO libspdk_blobfs_bdev.so.6.0 00:03:23.747 CC 
module/bdev/null/bdev_null.o 00:03:23.747 SYMLINK libspdk_fsdev_aio.so 00:03:23.747 CC module/bdev/error/vbdev_error_rpc.o 00:03:23.747 SYMLINK libspdk_blobfs_bdev.so 00:03:23.747 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:24.005 CC module/bdev/nvme/bdev_nvme.o 00:03:24.005 LIB libspdk_bdev_error.a 00:03:24.005 LIB libspdk_bdev_delay.a 00:03:24.005 CC module/bdev/passthru/vbdev_passthru.o 00:03:24.005 SO libspdk_bdev_error.so.6.0 00:03:24.005 SO libspdk_bdev_delay.so.6.0 00:03:24.005 CC module/bdev/raid/bdev_raid.o 00:03:24.005 LIB libspdk_bdev_gpt.a 00:03:24.005 CC module/bdev/null/bdev_null_rpc.o 00:03:24.005 SO libspdk_bdev_gpt.so.6.0 00:03:24.005 SYMLINK libspdk_bdev_error.so 00:03:24.005 SYMLINK libspdk_bdev_delay.so 00:03:24.005 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:24.005 CC module/bdev/raid/bdev_raid_rpc.o 00:03:24.005 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:24.005 SYMLINK libspdk_bdev_gpt.so 00:03:24.005 LIB libspdk_bdev_malloc.a 00:03:24.005 LIB libspdk_bdev_lvol.a 00:03:24.005 SO libspdk_bdev_malloc.so.6.0 00:03:24.005 SO libspdk_bdev_lvol.so.6.0 00:03:24.005 LIB libspdk_bdev_null.a 00:03:24.263 SO libspdk_bdev_null.so.6.0 00:03:24.263 SYMLINK libspdk_bdev_malloc.so 00:03:24.263 LIB libspdk_bdev_passthru.a 00:03:24.263 CC module/bdev/raid/bdev_raid_sb.o 00:03:24.263 CC module/bdev/split/vbdev_split.o 00:03:24.263 SYMLINK libspdk_bdev_lvol.so 00:03:24.263 CC module/bdev/split/vbdev_split_rpc.o 00:03:24.263 CC module/bdev/nvme/nvme_rpc.o 00:03:24.263 CC module/bdev/nvme/bdev_mdns_client.o 00:03:24.263 SO libspdk_bdev_passthru.so.6.0 00:03:24.263 SYMLINK libspdk_bdev_null.so 00:03:24.263 CC module/bdev/nvme/vbdev_opal.o 00:03:24.263 SYMLINK libspdk_bdev_passthru.so 00:03:24.263 CC module/bdev/raid/raid0.o 00:03:24.263 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:24.263 CC module/bdev/raid/raid1.o 00:03:24.263 CC module/bdev/raid/concat.o 00:03:24.263 LIB libspdk_bdev_split.a 00:03:24.522 SO libspdk_bdev_split.so.6.0 00:03:24.522 SYMLINK libspdk_bdev_split.so 00:03:24.522 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:24.522 CC module/bdev/xnvme/bdev_xnvme.o 00:03:24.522 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:24.522 CC module/bdev/aio/bdev_aio.o 00:03:24.522 CC module/bdev/ftl/bdev_ftl.o 00:03:24.522 CC module/bdev/iscsi/bdev_iscsi.o 00:03:24.522 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:24.781 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:24.781 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:24.781 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:24.781 LIB libspdk_bdev_raid.a 00:03:24.781 LIB libspdk_bdev_xnvme.a 00:03:24.781 SO libspdk_bdev_raid.so.6.0 00:03:24.781 SO libspdk_bdev_xnvme.so.3.0 00:03:24.781 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:24.781 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:24.781 LIB libspdk_bdev_ftl.a 00:03:24.781 CC module/bdev/aio/bdev_aio_rpc.o 00:03:24.781 LIB libspdk_bdev_iscsi.a 00:03:24.781 SYMLINK libspdk_bdev_raid.so 00:03:25.039 SO libspdk_bdev_ftl.so.6.0 00:03:25.039 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:25.039 SO libspdk_bdev_iscsi.so.6.0 00:03:25.039 SYMLINK libspdk_bdev_xnvme.so 00:03:25.039 SYMLINK libspdk_bdev_iscsi.so 00:03:25.039 SYMLINK libspdk_bdev_ftl.so 00:03:25.039 LIB libspdk_bdev_aio.a 00:03:25.039 LIB libspdk_bdev_zone_block.a 00:03:25.039 SO libspdk_bdev_aio.so.6.0 00:03:25.039 SO libspdk_bdev_zone_block.so.6.0 00:03:25.039 SYMLINK libspdk_bdev_aio.so 00:03:25.039 SYMLINK libspdk_bdev_zone_block.so 00:03:25.298 LIB libspdk_bdev_virtio.a 00:03:25.298 SO libspdk_bdev_virtio.so.6.0 
00:03:25.298 SYMLINK libspdk_bdev_virtio.so 00:03:26.231 LIB libspdk_bdev_nvme.a 00:03:26.231 SO libspdk_bdev_nvme.so.7.1 00:03:26.489 SYMLINK libspdk_bdev_nvme.so 00:03:26.746 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:26.746 CC module/event/subsystems/keyring/keyring.o 00:03:26.746 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:26.746 CC module/event/subsystems/iobuf/iobuf.o 00:03:26.746 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:26.746 CC module/event/subsystems/vmd/vmd.o 00:03:26.746 CC module/event/subsystems/sock/sock.o 00:03:26.746 CC module/event/subsystems/fsdev/fsdev.o 00:03:26.746 CC module/event/subsystems/scheduler/scheduler.o 00:03:27.005 LIB libspdk_event_sock.a 00:03:27.005 LIB libspdk_event_keyring.a 00:03:27.005 LIB libspdk_event_scheduler.a 00:03:27.005 LIB libspdk_event_vhost_blk.a 00:03:27.005 LIB libspdk_event_fsdev.a 00:03:27.005 LIB libspdk_event_vmd.a 00:03:27.005 SO libspdk_event_sock.so.5.0 00:03:27.005 SO libspdk_event_scheduler.so.4.0 00:03:27.005 SO libspdk_event_keyring.so.1.0 00:03:27.005 SO libspdk_event_vhost_blk.so.3.0 00:03:27.005 SO libspdk_event_fsdev.so.1.0 00:03:27.005 SO libspdk_event_vmd.so.6.0 00:03:27.005 LIB libspdk_event_iobuf.a 00:03:27.005 SYMLINK libspdk_event_scheduler.so 00:03:27.005 SYMLINK libspdk_event_keyring.so 00:03:27.005 SYMLINK libspdk_event_sock.so 00:03:27.005 SYMLINK libspdk_event_fsdev.so 00:03:27.005 SO libspdk_event_iobuf.so.3.0 00:03:27.005 SYMLINK libspdk_event_vhost_blk.so 00:03:27.005 SYMLINK libspdk_event_vmd.so 00:03:27.005 SYMLINK libspdk_event_iobuf.so 00:03:27.262 CC module/event/subsystems/accel/accel.o 00:03:27.262 LIB libspdk_event_accel.a 00:03:27.262 SO libspdk_event_accel.so.6.0 00:03:27.520 SYMLINK libspdk_event_accel.so 00:03:27.778 CC module/event/subsystems/bdev/bdev.o 00:03:27.778 LIB libspdk_event_bdev.a 00:03:27.778 SO libspdk_event_bdev.so.6.0 00:03:27.778 SYMLINK libspdk_event_bdev.so 00:03:28.036 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:28.036 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:28.036 CC module/event/subsystems/nbd/nbd.o 00:03:28.036 CC module/event/subsystems/scsi/scsi.o 00:03:28.036 CC module/event/subsystems/ublk/ublk.o 00:03:28.036 LIB libspdk_event_nbd.a 00:03:28.036 LIB libspdk_event_ublk.a 00:03:28.293 LIB libspdk_event_scsi.a 00:03:28.293 SO libspdk_event_nbd.so.6.0 00:03:28.294 SO libspdk_event_ublk.so.3.0 00:03:28.294 SO libspdk_event_scsi.so.6.0 00:03:28.294 SYMLINK libspdk_event_ublk.so 00:03:28.294 SYMLINK libspdk_event_nbd.so 00:03:28.294 LIB libspdk_event_nvmf.a 00:03:28.294 SYMLINK libspdk_event_scsi.so 00:03:28.294 SO libspdk_event_nvmf.so.6.0 00:03:28.294 SYMLINK libspdk_event_nvmf.so 00:03:28.294 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:28.554 CC module/event/subsystems/iscsi/iscsi.o 00:03:28.554 LIB libspdk_event_vhost_scsi.a 00:03:28.554 LIB libspdk_event_iscsi.a 00:03:28.554 SO libspdk_event_vhost_scsi.so.3.0 00:03:28.554 SO libspdk_event_iscsi.so.6.0 00:03:28.554 SYMLINK libspdk_event_vhost_scsi.so 00:03:28.554 SYMLINK libspdk_event_iscsi.so 00:03:28.814 SO libspdk.so.6.0 00:03:28.814 SYMLINK libspdk.so 00:03:28.814 CXX app/trace/trace.o 00:03:28.814 CC app/spdk_lspci/spdk_lspci.o 00:03:28.814 CC app/trace_record/trace_record.o 00:03:29.072 CC app/iscsi_tgt/iscsi_tgt.o 00:03:29.072 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:29.072 CC test/thread/poller_perf/poller_perf.o 00:03:29.072 CC app/nvmf_tgt/nvmf_main.o 00:03:29.072 CC examples/util/zipf/zipf.o 00:03:29.072 CC examples/ioat/perf/perf.o 00:03:29.072 CC 
app/spdk_tgt/spdk_tgt.o 00:03:29.072 LINK spdk_lspci 00:03:29.072 LINK poller_perf 00:03:29.072 LINK interrupt_tgt 00:03:29.072 LINK spdk_trace_record 00:03:29.072 LINK iscsi_tgt 00:03:29.072 LINK zipf 00:03:29.329 LINK ioat_perf 00:03:29.329 LINK nvmf_tgt 00:03:29.329 LINK spdk_tgt 00:03:29.329 CC app/spdk_nvme_perf/perf.o 00:03:29.329 LINK spdk_trace 00:03:29.329 CC examples/ioat/verify/verify.o 00:03:29.329 CC app/spdk_nvme_identify/identify.o 00:03:29.329 CC test/dma/test_dma/test_dma.o 00:03:29.329 TEST_HEADER include/spdk/accel.h 00:03:29.329 TEST_HEADER include/spdk/accel_module.h 00:03:29.329 TEST_HEADER include/spdk/assert.h 00:03:29.329 TEST_HEADER include/spdk/barrier.h 00:03:29.329 TEST_HEADER include/spdk/base64.h 00:03:29.329 TEST_HEADER include/spdk/bdev.h 00:03:29.329 TEST_HEADER include/spdk/bdev_module.h 00:03:29.329 TEST_HEADER include/spdk/bdev_zone.h 00:03:29.329 TEST_HEADER include/spdk/bit_array.h 00:03:29.329 TEST_HEADER include/spdk/bit_pool.h 00:03:29.329 TEST_HEADER include/spdk/blob_bdev.h 00:03:29.329 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:29.329 TEST_HEADER include/spdk/blobfs.h 00:03:29.329 TEST_HEADER include/spdk/blob.h 00:03:29.329 TEST_HEADER include/spdk/conf.h 00:03:29.329 TEST_HEADER include/spdk/config.h 00:03:29.329 TEST_HEADER include/spdk/cpuset.h 00:03:29.329 TEST_HEADER include/spdk/crc16.h 00:03:29.329 CC app/spdk_nvme_discover/discovery_aer.o 00:03:29.329 TEST_HEADER include/spdk/crc32.h 00:03:29.329 TEST_HEADER include/spdk/crc64.h 00:03:29.329 TEST_HEADER include/spdk/dif.h 00:03:29.588 TEST_HEADER include/spdk/dma.h 00:03:29.588 TEST_HEADER include/spdk/endian.h 00:03:29.588 TEST_HEADER include/spdk/env_dpdk.h 00:03:29.588 TEST_HEADER include/spdk/env.h 00:03:29.588 TEST_HEADER include/spdk/event.h 00:03:29.588 TEST_HEADER include/spdk/fd_group.h 00:03:29.588 TEST_HEADER include/spdk/fd.h 00:03:29.588 TEST_HEADER include/spdk/file.h 00:03:29.588 TEST_HEADER include/spdk/fsdev.h 00:03:29.588 TEST_HEADER include/spdk/fsdev_module.h 00:03:29.588 TEST_HEADER include/spdk/ftl.h 00:03:29.588 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:29.588 TEST_HEADER include/spdk/gpt_spec.h 00:03:29.588 TEST_HEADER include/spdk/hexlify.h 00:03:29.588 TEST_HEADER include/spdk/histogram_data.h 00:03:29.588 TEST_HEADER include/spdk/idxd.h 00:03:29.588 TEST_HEADER include/spdk/idxd_spec.h 00:03:29.588 CC test/app/bdev_svc/bdev_svc.o 00:03:29.588 TEST_HEADER include/spdk/init.h 00:03:29.588 TEST_HEADER include/spdk/ioat.h 00:03:29.588 TEST_HEADER include/spdk/ioat_spec.h 00:03:29.589 TEST_HEADER include/spdk/iscsi_spec.h 00:03:29.589 TEST_HEADER include/spdk/json.h 00:03:29.589 TEST_HEADER include/spdk/jsonrpc.h 00:03:29.589 TEST_HEADER include/spdk/keyring.h 00:03:29.589 TEST_HEADER include/spdk/keyring_module.h 00:03:29.589 TEST_HEADER include/spdk/likely.h 00:03:29.589 CC examples/thread/thread/thread_ex.o 00:03:29.589 TEST_HEADER include/spdk/log.h 00:03:29.589 TEST_HEADER include/spdk/lvol.h 00:03:29.589 TEST_HEADER include/spdk/md5.h 00:03:29.589 CC examples/sock/hello_world/hello_sock.o 00:03:29.589 TEST_HEADER include/spdk/memory.h 00:03:29.589 TEST_HEADER include/spdk/mmio.h 00:03:29.589 TEST_HEADER include/spdk/nbd.h 00:03:29.589 TEST_HEADER include/spdk/net.h 00:03:29.589 TEST_HEADER include/spdk/notify.h 00:03:29.589 TEST_HEADER include/spdk/nvme.h 00:03:29.589 TEST_HEADER include/spdk/nvme_intel.h 00:03:29.589 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:29.589 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:29.589 TEST_HEADER 
include/spdk/nvme_spec.h 00:03:29.589 TEST_HEADER include/spdk/nvme_zns.h 00:03:29.589 LINK verify 00:03:29.589 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:29.589 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:29.589 TEST_HEADER include/spdk/nvmf.h 00:03:29.589 TEST_HEADER include/spdk/nvmf_spec.h 00:03:29.589 TEST_HEADER include/spdk/nvmf_transport.h 00:03:29.589 TEST_HEADER include/spdk/opal.h 00:03:29.589 TEST_HEADER include/spdk/opal_spec.h 00:03:29.589 TEST_HEADER include/spdk/pci_ids.h 00:03:29.589 TEST_HEADER include/spdk/pipe.h 00:03:29.589 TEST_HEADER include/spdk/queue.h 00:03:29.589 CC app/spdk_top/spdk_top.o 00:03:29.589 TEST_HEADER include/spdk/reduce.h 00:03:29.589 TEST_HEADER include/spdk/rpc.h 00:03:29.589 TEST_HEADER include/spdk/scheduler.h 00:03:29.589 TEST_HEADER include/spdk/scsi.h 00:03:29.589 TEST_HEADER include/spdk/scsi_spec.h 00:03:29.589 TEST_HEADER include/spdk/sock.h 00:03:29.589 TEST_HEADER include/spdk/stdinc.h 00:03:29.589 TEST_HEADER include/spdk/string.h 00:03:29.589 TEST_HEADER include/spdk/thread.h 00:03:29.589 TEST_HEADER include/spdk/trace.h 00:03:29.589 TEST_HEADER include/spdk/trace_parser.h 00:03:29.589 TEST_HEADER include/spdk/tree.h 00:03:29.589 TEST_HEADER include/spdk/ublk.h 00:03:29.589 TEST_HEADER include/spdk/util.h 00:03:29.589 TEST_HEADER include/spdk/uuid.h 00:03:29.589 TEST_HEADER include/spdk/version.h 00:03:29.589 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:29.589 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:29.589 TEST_HEADER include/spdk/vhost.h 00:03:29.589 TEST_HEADER include/spdk/vmd.h 00:03:29.589 TEST_HEADER include/spdk/xor.h 00:03:29.589 TEST_HEADER include/spdk/zipf.h 00:03:29.589 CXX test/cpp_headers/accel.o 00:03:29.589 LINK spdk_nvme_discover 00:03:29.589 CXX test/cpp_headers/accel_module.o 00:03:29.589 LINK bdev_svc 00:03:29.855 LINK thread 00:03:29.855 LINK hello_sock 00:03:29.855 CXX test/cpp_headers/assert.o 00:03:29.855 CC examples/idxd/perf/perf.o 00:03:29.855 CC examples/vmd/lsvmd/lsvmd.o 00:03:29.855 CXX test/cpp_headers/barrier.o 00:03:29.855 LINK test_dma 00:03:29.855 CXX test/cpp_headers/base64.o 00:03:29.855 LINK lsvmd 00:03:29.855 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:30.112 CXX test/cpp_headers/bdev.o 00:03:30.112 CC examples/nvme/hello_world/hello_world.o 00:03:30.112 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:30.112 CC examples/vmd/led/led.o 00:03:30.113 LINK idxd_perf 00:03:30.113 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:30.113 LINK spdk_nvme_perf 00:03:30.113 LINK spdk_nvme_identify 00:03:30.113 CXX test/cpp_headers/bdev_module.o 00:03:30.113 LINK hello_world 00:03:30.369 LINK led 00:03:30.369 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:30.370 LINK nvme_fuzz 00:03:30.370 CC examples/nvme/reconnect/reconnect.o 00:03:30.370 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:30.370 LINK spdk_top 00:03:30.370 CXX test/cpp_headers/bdev_zone.o 00:03:30.370 CXX test/cpp_headers/bit_array.o 00:03:30.370 CC test/env/mem_callbacks/mem_callbacks.o 00:03:30.626 CC app/vhost/vhost.o 00:03:30.626 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:30.626 CXX test/cpp_headers/bit_pool.o 00:03:30.626 LINK vhost_fuzz 00:03:30.626 CC app/spdk_dd/spdk_dd.o 00:03:30.626 CC examples/accel/perf/accel_perf.o 00:03:30.626 LINK vhost 00:03:30.626 CXX test/cpp_headers/blob_bdev.o 00:03:30.626 CXX test/cpp_headers/blobfs_bdev.o 00:03:30.626 LINK reconnect 00:03:30.884 LINK hello_fsdev 00:03:30.884 CXX test/cpp_headers/blobfs.o 00:03:30.884 LINK nvme_manage 00:03:30.884 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:30.884 CC test/env/vtophys/vtophys.o 00:03:30.884 LINK mem_callbacks 00:03:30.884 LINK spdk_dd 00:03:30.884 CXX test/cpp_headers/blob.o 00:03:30.884 CC test/env/memory/memory_ut.o 00:03:30.884 CC examples/blob/hello_world/hello_blob.o 00:03:30.884 LINK accel_perf 00:03:31.140 CC examples/nvme/arbitration/arbitration.o 00:03:31.140 LINK vtophys 00:03:31.140 LINK env_dpdk_post_init 00:03:31.140 CXX test/cpp_headers/conf.o 00:03:31.140 LINK hello_blob 00:03:31.140 CXX test/cpp_headers/config.o 00:03:31.140 CXX test/cpp_headers/cpuset.o 00:03:31.140 CC examples/blob/cli/blobcli.o 00:03:31.140 CC app/fio/nvme/fio_plugin.o 00:03:31.396 CC test/event/event_perf/event_perf.o 00:03:31.396 CXX test/cpp_headers/crc16.o 00:03:31.396 CC test/nvme/aer/aer.o 00:03:31.396 LINK arbitration 00:03:31.396 CC examples/bdev/hello_world/hello_bdev.o 00:03:31.396 CC test/env/pci/pci_ut.o 00:03:31.396 LINK event_perf 00:03:31.396 CXX test/cpp_headers/crc32.o 00:03:31.396 CC examples/nvme/hotplug/hotplug.o 00:03:31.653 LINK aer 00:03:31.653 LINK hello_bdev 00:03:31.653 LINK iscsi_fuzz 00:03:31.653 CXX test/cpp_headers/crc64.o 00:03:31.653 CC test/event/reactor/reactor.o 00:03:31.653 LINK hotplug 00:03:31.653 LINK blobcli 00:03:31.653 LINK pci_ut 00:03:31.653 CC test/nvme/reset/reset.o 00:03:31.653 LINK spdk_nvme 00:03:31.653 CXX test/cpp_headers/dif.o 00:03:31.653 LINK reactor 00:03:31.910 CC test/app/histogram_perf/histogram_perf.o 00:03:31.910 CC examples/bdev/bdevperf/bdevperf.o 00:03:31.910 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:31.910 CXX test/cpp_headers/dma.o 00:03:31.910 CC examples/nvme/abort/abort.o 00:03:31.910 CC app/fio/bdev/fio_plugin.o 00:03:31.910 LINK reset 00:03:31.910 LINK histogram_perf 00:03:31.910 CC test/event/reactor_perf/reactor_perf.o 00:03:31.910 CC test/event/app_repeat/app_repeat.o 00:03:31.910 CXX test/cpp_headers/endian.o 00:03:31.910 LINK cmb_copy 00:03:32.167 LINK memory_ut 00:03:32.167 LINK reactor_perf 00:03:32.167 LINK app_repeat 00:03:32.167 CC test/nvme/sgl/sgl.o 00:03:32.167 CC test/app/jsoncat/jsoncat.o 00:03:32.167 CXX test/cpp_headers/env_dpdk.o 00:03:32.167 LINK abort 00:03:32.167 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:32.167 LINK jsoncat 00:03:32.167 CC test/rpc_client/rpc_client_test.o 00:03:32.167 CC test/event/scheduler/scheduler.o 00:03:32.167 CXX test/cpp_headers/env.o 00:03:32.425 LINK sgl 00:03:32.425 LINK pmr_persistence 00:03:32.425 CC test/accel/dif/dif.o 00:03:32.425 LINK spdk_bdev 00:03:32.425 CXX test/cpp_headers/event.o 00:03:32.425 CC test/app/stub/stub.o 00:03:32.425 LINK rpc_client_test 00:03:32.425 CC test/blobfs/mkfs/mkfs.o 00:03:32.425 LINK scheduler 00:03:32.425 CC test/nvme/e2edp/nvme_dp.o 00:03:32.425 CC test/nvme/overhead/overhead.o 00:03:32.684 CC test/nvme/err_injection/err_injection.o 00:03:32.684 CXX test/cpp_headers/fd_group.o 00:03:32.684 LINK stub 00:03:32.684 CC test/nvme/startup/startup.o 00:03:32.684 LINK mkfs 00:03:32.684 LINK bdevperf 00:03:32.684 CXX test/cpp_headers/fd.o 00:03:32.684 CXX test/cpp_headers/file.o 00:03:32.684 LINK err_injection 00:03:32.684 LINK startup 00:03:32.684 CXX test/cpp_headers/fsdev.o 00:03:32.684 LINK nvme_dp 00:03:32.684 LINK overhead 00:03:32.684 CC test/nvme/reserve/reserve.o 00:03:32.941 CXX test/cpp_headers/fsdev_module.o 00:03:32.941 CXX test/cpp_headers/ftl.o 00:03:32.941 CC test/lvol/esnap/esnap.o 00:03:32.941 CC test/nvme/connect_stress/connect_stress.o 00:03:32.941 LINK dif 00:03:32.941 CC 
test/nvme/simple_copy/simple_copy.o 00:03:32.941 CC examples/nvmf/nvmf/nvmf.o 00:03:32.941 LINK reserve 00:03:32.941 CC test/nvme/boot_partition/boot_partition.o 00:03:32.941 CC test/nvme/compliance/nvme_compliance.o 00:03:32.941 CXX test/cpp_headers/fuse_dispatcher.o 00:03:32.941 CXX test/cpp_headers/gpt_spec.o 00:03:33.197 LINK connect_stress 00:03:33.197 LINK simple_copy 00:03:33.197 LINK boot_partition 00:03:33.197 CC test/nvme/fused_ordering/fused_ordering.o 00:03:33.197 CXX test/cpp_headers/hexlify.o 00:03:33.197 CXX test/cpp_headers/histogram_data.o 00:03:33.197 CC test/bdev/bdevio/bdevio.o 00:03:33.197 LINK nvmf 00:03:33.197 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:33.197 LINK nvme_compliance 00:03:33.197 CC test/nvme/fdp/fdp.o 00:03:33.454 CC test/nvme/cuse/cuse.o 00:03:33.454 CXX test/cpp_headers/idxd.o 00:03:33.454 CXX test/cpp_headers/idxd_spec.o 00:03:33.454 LINK fused_ordering 00:03:33.454 CXX test/cpp_headers/init.o 00:03:33.454 CXX test/cpp_headers/ioat.o 00:03:33.454 LINK doorbell_aers 00:03:33.454 CXX test/cpp_headers/ioat_spec.o 00:03:33.454 CXX test/cpp_headers/iscsi_spec.o 00:03:33.454 CXX test/cpp_headers/json.o 00:03:33.454 CXX test/cpp_headers/jsonrpc.o 00:03:33.454 LINK bdevio 00:03:33.454 CXX test/cpp_headers/keyring.o 00:03:33.711 CXX test/cpp_headers/keyring_module.o 00:03:33.711 CXX test/cpp_headers/likely.o 00:03:33.711 CXX test/cpp_headers/log.o 00:03:33.711 CXX test/cpp_headers/lvol.o 00:03:33.711 CXX test/cpp_headers/md5.o 00:03:33.711 CXX test/cpp_headers/memory.o 00:03:33.711 LINK fdp 00:03:33.711 CXX test/cpp_headers/mmio.o 00:03:33.711 CXX test/cpp_headers/nbd.o 00:03:33.711 CXX test/cpp_headers/net.o 00:03:33.711 CXX test/cpp_headers/notify.o 00:03:33.711 CXX test/cpp_headers/nvme.o 00:03:33.711 CXX test/cpp_headers/nvme_intel.o 00:03:33.711 CXX test/cpp_headers/nvme_ocssd.o 00:03:33.711 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:33.711 CXX test/cpp_headers/nvme_spec.o 00:03:33.970 CXX test/cpp_headers/nvme_zns.o 00:03:33.970 CXX test/cpp_headers/nvmf_cmd.o 00:03:33.970 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:33.970 CXX test/cpp_headers/nvmf.o 00:03:33.970 CXX test/cpp_headers/nvmf_spec.o 00:03:33.970 CXX test/cpp_headers/nvmf_transport.o 00:03:33.970 CXX test/cpp_headers/opal.o 00:03:33.970 CXX test/cpp_headers/opal_spec.o 00:03:33.970 CXX test/cpp_headers/pci_ids.o 00:03:33.970 CXX test/cpp_headers/pipe.o 00:03:33.970 CXX test/cpp_headers/queue.o 00:03:33.970 CXX test/cpp_headers/reduce.o 00:03:33.970 CXX test/cpp_headers/rpc.o 00:03:33.970 CXX test/cpp_headers/scheduler.o 00:03:33.970 CXX test/cpp_headers/scsi.o 00:03:33.970 CXX test/cpp_headers/scsi_spec.o 00:03:34.228 CXX test/cpp_headers/sock.o 00:03:34.228 CXX test/cpp_headers/stdinc.o 00:03:34.228 CXX test/cpp_headers/string.o 00:03:34.228 CXX test/cpp_headers/thread.o 00:03:34.228 CXX test/cpp_headers/trace.o 00:03:34.228 CXX test/cpp_headers/trace_parser.o 00:03:34.228 CXX test/cpp_headers/tree.o 00:03:34.228 CXX test/cpp_headers/ublk.o 00:03:34.228 CXX test/cpp_headers/util.o 00:03:34.228 CXX test/cpp_headers/uuid.o 00:03:34.228 LINK cuse 00:03:34.228 CXX test/cpp_headers/version.o 00:03:34.228 CXX test/cpp_headers/vfio_user_pci.o 00:03:34.228 CXX test/cpp_headers/vfio_user_spec.o 00:03:34.228 CXX test/cpp_headers/vhost.o 00:03:34.228 CXX test/cpp_headers/vmd.o 00:03:34.498 CXX test/cpp_headers/xor.o 00:03:34.498 CXX test/cpp_headers/zipf.o 00:03:37.872 LINK esnap 00:03:37.872 00:03:37.872 real 1m6.627s 00:03:37.872 user 6m11.630s 00:03:37.872 sys 1m7.452s 00:03:37.872 23:05:09 
make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:37.872 23:05:09 make -- common/autotest_common.sh@10 -- $ set +x 00:03:37.872 ************************************ 00:03:37.872 END TEST make 00:03:37.872 ************************************ 00:03:37.872 23:05:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:37.872 23:05:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:37.872 23:05:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:37.872 23:05:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.872 23:05:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:37.872 23:05:09 -- pm/common@44 -- $ pid=5072 00:03:37.872 23:05:09 -- pm/common@50 -- $ kill -TERM 5072 00:03:37.872 23:05:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.872 23:05:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:37.872 23:05:09 -- pm/common@44 -- $ pid=5073 00:03:37.872 23:05:09 -- pm/common@50 -- $ kill -TERM 5073 00:03:37.872 23:05:09 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:37.872 23:05:09 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:37.872 23:05:10 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:37.872 23:05:10 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:37.872 23:05:10 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:37.872 23:05:10 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:37.872 23:05:10 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:37.872 23:05:10 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:37.872 23:05:10 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:37.872 23:05:10 -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.872 23:05:10 -- scripts/common.sh@336 -- # read -ra ver1 00:03:37.872 23:05:10 -- scripts/common.sh@337 -- # IFS=.-: 00:03:37.872 23:05:10 -- scripts/common.sh@337 -- # read -ra ver2 00:03:37.872 23:05:10 -- scripts/common.sh@338 -- # local 'op=<' 00:03:37.873 23:05:10 -- scripts/common.sh@340 -- # ver1_l=2 00:03:37.873 23:05:10 -- scripts/common.sh@341 -- # ver2_l=1 00:03:37.873 23:05:10 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:37.873 23:05:10 -- scripts/common.sh@344 -- # case "$op" in 00:03:37.873 23:05:10 -- scripts/common.sh@345 -- # : 1 00:03:37.873 23:05:10 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:37.873 23:05:10 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:37.873 23:05:10 -- scripts/common.sh@365 -- # decimal 1 00:03:37.873 23:05:10 -- scripts/common.sh@353 -- # local d=1 00:03:37.873 23:05:10 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.873 23:05:10 -- scripts/common.sh@355 -- # echo 1 00:03:37.873 23:05:10 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:37.873 23:05:10 -- scripts/common.sh@366 -- # decimal 2 00:03:37.873 23:05:10 -- scripts/common.sh@353 -- # local d=2 00:03:37.873 23:05:10 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.873 23:05:10 -- scripts/common.sh@355 -- # echo 2 00:03:37.873 23:05:10 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:37.873 23:05:10 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:37.873 23:05:10 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:37.873 23:05:10 -- scripts/common.sh@368 -- # return 0 00:03:37.873 23:05:10 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.873 23:05:10 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:37.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.873 --rc genhtml_branch_coverage=1 00:03:37.873 --rc genhtml_function_coverage=1 00:03:37.873 --rc genhtml_legend=1 00:03:37.873 --rc geninfo_all_blocks=1 00:03:37.873 --rc geninfo_unexecuted_blocks=1 00:03:37.873 00:03:37.873 ' 00:03:37.873 23:05:10 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:37.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.873 --rc genhtml_branch_coverage=1 00:03:37.873 --rc genhtml_function_coverage=1 00:03:37.873 --rc genhtml_legend=1 00:03:37.873 --rc geninfo_all_blocks=1 00:03:37.873 --rc geninfo_unexecuted_blocks=1 00:03:37.873 00:03:37.873 ' 00:03:37.873 23:05:10 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:37.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.873 --rc genhtml_branch_coverage=1 00:03:37.873 --rc genhtml_function_coverage=1 00:03:37.873 --rc genhtml_legend=1 00:03:37.873 --rc geninfo_all_blocks=1 00:03:37.873 --rc geninfo_unexecuted_blocks=1 00:03:37.873 00:03:37.873 ' 00:03:37.873 23:05:10 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:37.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.873 --rc genhtml_branch_coverage=1 00:03:37.873 --rc genhtml_function_coverage=1 00:03:37.873 --rc genhtml_legend=1 00:03:37.873 --rc geninfo_all_blocks=1 00:03:37.873 --rc geninfo_unexecuted_blocks=1 00:03:37.873 00:03:37.873 ' 00:03:37.873 23:05:10 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:37.873 23:05:10 -- nvmf/common.sh@7 -- # uname -s 00:03:37.873 23:05:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:37.873 23:05:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:37.873 23:05:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:37.873 23:05:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:37.873 23:05:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:37.873 23:05:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:37.873 23:05:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:37.873 23:05:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:37.873 23:05:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:37.873 23:05:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:37.873 23:05:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:70c0241f-21df-45be-86df-ddce1c85fb81 00:03:37.873 
23:05:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=70c0241f-21df-45be-86df-ddce1c85fb81 00:03:37.873 23:05:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:37.873 23:05:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:37.873 23:05:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:37.873 23:05:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:37.873 23:05:10 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:37.873 23:05:10 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:37.873 23:05:10 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:37.873 23:05:10 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:37.873 23:05:10 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:37.873 23:05:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.873 23:05:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.873 23:05:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.873 23:05:10 -- paths/export.sh@5 -- # export PATH 00:03:37.873 23:05:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.873 23:05:10 -- nvmf/common.sh@51 -- # : 0 00:03:37.873 23:05:10 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:37.873 23:05:10 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:37.873 23:05:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:37.873 23:05:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:37.873 23:05:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:37.873 23:05:10 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:37.873 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:37.873 23:05:10 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:37.873 23:05:10 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:37.873 23:05:10 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:37.873 23:05:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:37.873 23:05:10 -- spdk/autotest.sh@32 -- # uname -s 00:03:37.873 23:05:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:37.873 23:05:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:37.873 23:05:10 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:37.873 23:05:10 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:37.873 23:05:10 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:37.873 23:05:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:37.873 23:05:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:37.873 23:05:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:37.873 23:05:10 -- spdk/autotest.sh@48 -- # udevadm_pid=54235 00:03:37.873 23:05:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:37.873 23:05:10 -- pm/common@17 -- # local monitor 00:03:37.873 23:05:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:37.873 23:05:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.873 23:05:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.873 23:05:10 -- pm/common@25 -- # sleep 1 00:03:37.873 23:05:10 -- pm/common@21 -- # date +%s 00:03:37.873 23:05:10 -- pm/common@21 -- # date +%s 00:03:37.873 23:05:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732575910 00:03:37.874 23:05:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732575910 00:03:37.874 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732575910_collect-cpu-load.pm.log 00:03:37.874 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732575910_collect-vmstat.pm.log 00:03:39.250 23:05:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:39.250 23:05:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:39.250 23:05:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:39.250 23:05:11 -- common/autotest_common.sh@10 -- # set +x 00:03:39.250 23:05:11 -- spdk/autotest.sh@59 -- # create_test_list 00:03:39.250 23:05:11 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:39.250 23:05:11 -- common/autotest_common.sh@10 -- # set +x 00:03:39.250 23:05:11 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:39.250 23:05:11 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:39.250 23:05:11 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:39.250 23:05:11 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:39.250 23:05:11 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:39.250 23:05:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:39.250 23:05:11 -- common/autotest_common.sh@1457 -- # uname 00:03:39.250 23:05:11 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:39.250 23:05:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:39.250 23:05:11 -- common/autotest_common.sh@1477 -- # uname 00:03:39.250 23:05:11 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:39.250 23:05:11 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:39.250 23:05:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:39.250 lcov: LCOV version 1.15 00:03:39.250 23:05:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:54.133 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:54.133 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:09.113 23:05:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:09.113 23:05:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.113 23:05:39 -- common/autotest_common.sh@10 -- # set +x 00:04:09.113 23:05:39 -- spdk/autotest.sh@78 -- # rm -f 00:04:09.113 23:05:39 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.113 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.113 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:09.113 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:09.113 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:09.113 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:09.113 23:05:40 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:09.113 23:05:40 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:09.113 23:05:40 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:09.113 23:05:40 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:09.113 23:05:40 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:09.113 23:05:40 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:09.113 23:05:40 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:09.113 23:05:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:09.113 23:05:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:09.113 23:05:40 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:09.113 23:05:40 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:09.113 23:05:40 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:09.113 23:05:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:09.113 23:05:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:09.113 23:05:40 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:09.113 23:05:40 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:09.113 23:05:40 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:09.113 23:05:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:09.113 23:05:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:09.113 23:05:40 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:09.113 23:05:40 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:09.113 23:05:40 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:09.113 23:05:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:09.113 23:05:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:09.113 23:05:40 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:09.113 23:05:40 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:09.113 23:05:40 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:09.113 23:05:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:09.113 23:05:40 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:09.113 23:05:40 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:09.113 23:05:40 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:09.113 23:05:40 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:09.113 23:05:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:09.113 23:05:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:09.113 23:05:40 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:09.113 23:05:40 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:09.113 23:05:40 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:09.113 23:05:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:09.113 23:05:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:09.113 23:05:40 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:09.113 23:05:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:09.113 23:05:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:09.113 23:05:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:09.113 23:05:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:09.113 23:05:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:09.113 No valid GPT data, bailing 00:04:09.113 23:05:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:09.113 23:05:40 -- scripts/common.sh@394 -- # pt= 00:04:09.113 23:05:40 -- scripts/common.sh@395 -- # return 1 00:04:09.113 23:05:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:09.113 1+0 records in 00:04:09.113 1+0 records out 00:04:09.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0350039 s, 30.0 MB/s 00:04:09.113 23:05:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:09.113 23:05:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:09.113 23:05:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:09.113 23:05:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:09.113 23:05:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:09.113 No valid GPT data, bailing 00:04:09.113 23:05:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:09.113 23:05:40 -- scripts/common.sh@394 -- # pt= 00:04:09.113 23:05:40 -- scripts/common.sh@395 -- # return 1 00:04:09.113 23:05:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:09.113 1+0 records in 00:04:09.113 1+0 records out 00:04:09.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00597716 s, 175 MB/s 00:04:09.113 23:05:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:09.113 23:05:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:09.113 23:05:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:09.113 23:05:40 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:09.113 23:05:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:09.113 No valid GPT data, bailing 00:04:09.113 23:05:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:09.113 23:05:41 -- scripts/common.sh@394 -- # pt= 00:04:09.113 23:05:41 -- scripts/common.sh@395 -- # return 1 00:04:09.113 23:05:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:09.113 1+0 
records in 00:04:09.113 1+0 records out 00:04:09.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00702278 s, 149 MB/s 00:04:09.113 23:05:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:09.113 23:05:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:09.113 23:05:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:09.113 23:05:41 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:09.113 23:05:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:09.113 No valid GPT data, bailing 00:04:09.113 23:05:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:09.113 23:05:41 -- scripts/common.sh@394 -- # pt= 00:04:09.113 23:05:41 -- scripts/common.sh@395 -- # return 1 00:04:09.113 23:05:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:09.113 1+0 records in 00:04:09.113 1+0 records out 00:04:09.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498499 s, 210 MB/s 00:04:09.113 23:05:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:09.113 23:05:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:09.113 23:05:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:09.113 23:05:41 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:09.113 23:05:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:09.113 No valid GPT data, bailing 00:04:09.114 23:05:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:09.114 23:05:41 -- scripts/common.sh@394 -- # pt= 00:04:09.114 23:05:41 -- scripts/common.sh@395 -- # return 1 00:04:09.114 23:05:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:09.114 1+0 records in 00:04:09.114 1+0 records out 00:04:09.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00595928 s, 176 MB/s 00:04:09.114 23:05:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:09.114 23:05:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:09.114 23:05:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:09.114 23:05:41 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:09.114 23:05:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:09.114 No valid GPT data, bailing 00:04:09.114 23:05:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:09.114 23:05:41 -- scripts/common.sh@394 -- # pt= 00:04:09.114 23:05:41 -- scripts/common.sh@395 -- # return 1 00:04:09.114 23:05:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:09.114 1+0 records in 00:04:09.114 1+0 records out 00:04:09.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00584732 s, 179 MB/s 00:04:09.114 23:05:41 -- spdk/autotest.sh@105 -- # sync 00:04:09.114 23:05:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:09.114 23:05:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:09.114 23:05:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:11.047 23:05:43 -- spdk/autotest.sh@111 -- # uname -s 00:04:11.047 23:05:43 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:11.047 23:05:43 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:11.047 23:05:43 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:11.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.568 
Hugepages 00:04:11.568 node hugesize free / total 00:04:11.568 node0 1048576kB 0 / 0 00:04:11.568 node0 2048kB 0 / 0 00:04:11.568 00:04:11.568 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:11.833 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:11.833 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:11.833 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:11.833 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:12.107 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:12.107 23:05:44 -- spdk/autotest.sh@117 -- # uname -s 00:04:12.107 23:05:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:12.107 23:05:44 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:12.107 23:05:44 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.368 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.941 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.941 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.941 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.202 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.202 23:05:45 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:14.144 23:05:46 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:14.144 23:05:46 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:14.145 23:05:46 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:14.145 23:05:46 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:14.145 23:05:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:14.145 23:05:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:14.145 23:05:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.145 23:05:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:14.145 23:05:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:14.145 23:05:46 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:14.145 23:05:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:14.145 23:05:46 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.732 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.732 Waiting for block devices as requested 00:04:14.732 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:14.732 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:15.007 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:15.007 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:20.286 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:20.286 23:05:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:20.286 23:05:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:20.286 23:05:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:20.286 23:05:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:20.286 23:05:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:20.286 23:05:52 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:20.286 23:05:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:20.286 23:05:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:20.286 23:05:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:20.286 23:05:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:20.286 23:05:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:20.286 23:05:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:20.286 23:05:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:20.286 23:05:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:20.286 23:05:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:20.286 23:05:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:20.286 23:05:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:20.286 23:05:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:20.286 23:05:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:20.286 23:05:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:20.286 23:05:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:20.286 23:05:52 -- common/autotest_common.sh@1543 -- # continue 00:04:20.286 23:05:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:20.286 23:05:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:20.286 23:05:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:20.286 23:05:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:20.286 23:05:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:20.286 23:05:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:20.286 23:05:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:20.286 23:05:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:20.286 23:05:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:20.286 23:05:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:20.286 23:05:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:20.286 23:05:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:20.286 23:05:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:20.286 23:05:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:20.286 23:05:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:20.287 23:05:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:20.287 23:05:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:20.287 23:05:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:20.287 23:05:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:20.287 23:05:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:20.287 23:05:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:20.287 23:05:52 -- common/autotest_common.sh@1543 -- # continue 00:04:20.287 23:05:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:20.287 23:05:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:20.287 23:05:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:20.287 23:05:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:20.287 23:05:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:20.287 23:05:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:20.287 23:05:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:20.287 23:05:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:20.287 23:05:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:20.287 23:05:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:20.287 23:05:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:20.287 23:05:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:20.287 23:05:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:20.287 23:05:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:20.287 23:05:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:20.287 23:05:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:20.287 23:05:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:20.287 23:05:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:20.287 23:05:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:20.287 23:05:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:20.287 23:05:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:20.287 23:05:52 -- common/autotest_common.sh@1543 -- # continue 00:04:20.287 23:05:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:20.287 23:05:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:20.287 23:05:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:20.287 23:05:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:20.287 23:05:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:20.287 23:05:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:20.287 23:05:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:20.287 23:05:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:20.287 23:05:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:20.287 23:05:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:20.287 23:05:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:20.287 23:05:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:20.287 23:05:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:20.287 23:05:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:20.287 23:05:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:20.287 23:05:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:20.287 23:05:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:20.287 23:05:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:20.287 23:05:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:20.287 23:05:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:20.287 23:05:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:04:20.287 23:05:52 -- common/autotest_common.sh@1543 -- # continue 00:04:20.287 23:05:52 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:20.287 23:05:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.287 23:05:52 -- common/autotest_common.sh@10 -- # set +x 00:04:20.287 23:05:52 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:20.287 23:05:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.287 23:05:52 -- common/autotest_common.sh@10 -- # set +x 00:04:20.287 23:05:52 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:20.545 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.130 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.130 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.130 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.130 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.130 23:05:53 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:21.130 23:05:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.130 23:05:53 -- common/autotest_common.sh@10 -- # set +x 00:04:21.130 23:05:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:21.130 23:05:53 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:21.130 23:05:53 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:21.130 23:05:53 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:21.130 23:05:53 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:21.130 23:05:53 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:21.130 23:05:53 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:21.130 23:05:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:21.130 23:05:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:21.130 23:05:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:21.130 23:05:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:21.130 23:05:53 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:21.130 23:05:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:21.130 23:05:53 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:21.130 23:05:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:21.130 23:05:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:21.130 23:05:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:21.130 23:05:53 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:21.130 23:05:53 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:21.130 23:05:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:21.130 23:05:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:21.130 23:05:53 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:21.130 23:05:53 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:21.130 23:05:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:21.130 23:05:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:21.130 23:05:53 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:21.130 23:05:53 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:21.131 23:05:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:21.131 23:05:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:21.131 23:05:53 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:21.131 23:05:53 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:21.131 23:05:53 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:21.131 23:05:53 -- common/autotest_common.sh@1572 -- # return 0 00:04:21.131 23:05:53 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:21.131 23:05:53 -- common/autotest_common.sh@1580 -- # return 0 00:04:21.131 23:05:53 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:21.131 23:05:53 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:21.131 23:05:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:21.131 23:05:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:21.131 23:05:53 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:21.131 23:05:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.131 23:05:53 -- common/autotest_common.sh@10 -- # set +x 00:04:21.131 23:05:53 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:21.131 23:05:53 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:21.131 23:05:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.131 23:05:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.131 23:05:53 -- common/autotest_common.sh@10 -- # set +x 00:04:21.394 ************************************ 00:04:21.394 START TEST env 00:04:21.394 ************************************ 00:04:21.394 23:05:53 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:21.394 * Looking for test storage... 00:04:21.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:21.394 23:05:53 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.394 23:05:53 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.394 23:05:53 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.394 23:05:53 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.394 23:05:53 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.394 23:05:53 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.394 23:05:53 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.394 23:05:53 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.394 23:05:53 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.394 23:05:53 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.394 23:05:53 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.394 23:05:53 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.394 23:05:53 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.394 23:05:53 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.394 23:05:53 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.394 23:05:53 env -- scripts/common.sh@344 -- # case "$op" in 00:04:21.394 23:05:53 env -- scripts/common.sh@345 -- # : 1 00:04:21.394 23:05:53 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.394 23:05:53 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.394 23:05:53 env -- scripts/common.sh@365 -- # decimal 1 00:04:21.394 23:05:53 env -- scripts/common.sh@353 -- # local d=1 00:04:21.394 23:05:53 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.394 23:05:53 env -- scripts/common.sh@355 -- # echo 1 00:04:21.394 23:05:53 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.394 23:05:53 env -- scripts/common.sh@366 -- # decimal 2 00:04:21.394 23:05:53 env -- scripts/common.sh@353 -- # local d=2 00:04:21.394 23:05:53 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.394 23:05:53 env -- scripts/common.sh@355 -- # echo 2 00:04:21.394 23:05:53 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.394 23:05:53 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.394 23:05:53 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.394 23:05:53 env -- scripts/common.sh@368 -- # return 0 00:04:21.394 23:05:53 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.394 23:05:53 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.394 --rc genhtml_branch_coverage=1 00:04:21.394 --rc genhtml_function_coverage=1 00:04:21.394 --rc genhtml_legend=1 00:04:21.394 --rc geninfo_all_blocks=1 00:04:21.394 --rc geninfo_unexecuted_blocks=1 00:04:21.394 00:04:21.394 ' 00:04:21.394 23:05:53 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.394 --rc genhtml_branch_coverage=1 00:04:21.394 --rc genhtml_function_coverage=1 00:04:21.394 --rc genhtml_legend=1 00:04:21.394 --rc geninfo_all_blocks=1 00:04:21.394 --rc geninfo_unexecuted_blocks=1 00:04:21.394 00:04:21.394 ' 00:04:21.394 23:05:53 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.394 --rc genhtml_branch_coverage=1 00:04:21.394 --rc genhtml_function_coverage=1 00:04:21.394 --rc genhtml_legend=1 00:04:21.394 --rc geninfo_all_blocks=1 00:04:21.394 --rc geninfo_unexecuted_blocks=1 00:04:21.394 00:04:21.394 ' 00:04:21.394 23:05:53 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.394 --rc genhtml_branch_coverage=1 00:04:21.394 --rc genhtml_function_coverage=1 00:04:21.394 --rc genhtml_legend=1 00:04:21.394 --rc geninfo_all_blocks=1 00:04:21.394 --rc geninfo_unexecuted_blocks=1 00:04:21.394 00:04:21.394 ' 00:04:21.394 23:05:53 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:21.394 23:05:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.394 23:05:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.394 23:05:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.394 ************************************ 00:04:21.394 START TEST env_memory 00:04:21.394 ************************************ 00:04:21.394 23:05:53 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:21.394 00:04:21.394 00:04:21.394 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.394 http://cunit.sourceforge.net/ 00:04:21.394 00:04:21.394 00:04:21.394 Suite: memory 00:04:21.394 Test: alloc and free memory map ...[2024-11-25 23:05:53.691965] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:21.394 passed 00:04:21.395 Test: mem map translation ...[2024-11-25 23:05:53.730548] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:21.395 [2024-11-25 23:05:53.730586] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:21.395 [2024-11-25 23:05:53.730644] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:21.395 [2024-11-25 23:05:53.730656] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:21.653 passed 00:04:21.653 Test: mem map registration ...[2024-11-25 23:05:53.798571] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:21.653 [2024-11-25 23:05:53.798607] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:21.653 passed 00:04:21.653 Test: mem map adjacent registrations ...passed 00:04:21.653 00:04:21.653 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.653 suites 1 1 n/a 0 0 00:04:21.653 tests 4 4 4 0 0 00:04:21.653 asserts 152 152 152 0 n/a 00:04:21.653 00:04:21.653 Elapsed time = 0.232 seconds 00:04:21.653 00:04:21.653 real 0m0.266s 00:04:21.653 user 0m0.241s 00:04:21.653 sys 0m0.018s 00:04:21.653 23:05:53 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.653 23:05:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:21.653 ************************************ 00:04:21.653 END TEST env_memory 00:04:21.653 ************************************ 00:04:21.653 23:05:53 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:21.653 23:05:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.653 23:05:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.653 23:05:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.653 ************************************ 00:04:21.653 START TEST env_vtophys 00:04:21.653 ************************************ 00:04:21.653 23:05:53 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:21.653 EAL: lib.eal log level changed from notice to debug 00:04:21.653 EAL: Detected lcore 0 as core 0 on socket 0 00:04:21.653 EAL: Detected lcore 1 as core 0 on socket 0 00:04:21.653 EAL: Detected lcore 2 as core 0 on socket 0 00:04:21.653 EAL: Detected lcore 3 as core 0 on socket 0 00:04:21.653 EAL: Detected lcore 4 as core 0 on socket 0 00:04:21.653 EAL: Detected lcore 5 as core 0 on socket 0 00:04:21.653 EAL: Detected lcore 6 as core 0 on socket 0 00:04:21.653 EAL: Detected lcore 7 as core 0 on socket 0 00:04:21.653 EAL: Detected lcore 8 as core 0 on socket 0 00:04:21.653 EAL: Detected lcore 9 as core 0 on socket 0 00:04:21.653 EAL: Maximum logical cores by configuration: 128 00:04:21.653 EAL: Detected CPU lcores: 10 00:04:21.653 EAL: Detected NUMA nodes: 1 00:04:21.653 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:21.653 EAL: Detected shared linkage of DPDK 00:04:21.653 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:21.653 EAL: Selected IOVA mode 'PA' 00:04:21.653 EAL: Probing VFIO support... 00:04:21.653 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:21.653 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:21.653 EAL: Ask a virtual area of 0x2e000 bytes 00:04:21.653 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:21.653 EAL: Setting up physically contiguous memory... 00:04:21.653 EAL: Setting maximum number of open files to 524288 00:04:21.653 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:21.653 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:21.653 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.653 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:21.653 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.653 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.653 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:21.653 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:21.653 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.653 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:21.653 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.653 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.653 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:21.653 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:21.653 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.653 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:21.653 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.653 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.653 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:21.653 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:21.653 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.653 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:21.653 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.653 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.653 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:21.653 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:21.653 EAL: Hugepages will be freed exactly as allocated. 00:04:21.653 EAL: No shared files mode enabled, IPC is disabled 00:04:21.653 EAL: No shared files mode enabled, IPC is disabled 00:04:21.914 EAL: TSC frequency is ~2600000 KHz 00:04:21.914 EAL: Main lcore 0 is ready (tid=7f0a852a9a40;cpuset=[0]) 00:04:21.914 EAL: Trying to obtain current memory policy. 00:04:21.914 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.914 EAL: Restoring previous memory policy: 0 00:04:21.914 EAL: request: mp_malloc_sync 00:04:21.914 EAL: No shared files mode enabled, IPC is disabled 00:04:21.914 EAL: Heap on socket 0 was expanded by 2MB 00:04:21.914 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:21.914 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:21.914 EAL: Mem event callback 'spdk:(nil)' registered 00:04:21.914 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:21.914 00:04:21.914 00:04:21.914 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.914 http://cunit.sourceforge.net/ 00:04:21.914 00:04:21.914 00:04:21.914 Suite: components_suite 00:04:22.175 Test: vtophys_malloc_test ...passed 00:04:22.175 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:22.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.175 EAL: Restoring previous memory policy: 4 00:04:22.175 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.175 EAL: request: mp_malloc_sync 00:04:22.175 EAL: No shared files mode enabled, IPC is disabled 00:04:22.175 EAL: Heap on socket 0 was expanded by 4MB 00:04:22.175 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.175 EAL: request: mp_malloc_sync 00:04:22.175 EAL: No shared files mode enabled, IPC is disabled 00:04:22.175 EAL: Heap on socket 0 was shrunk by 4MB 00:04:22.175 EAL: Trying to obtain current memory policy. 00:04:22.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.175 EAL: Restoring previous memory policy: 4 00:04:22.175 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.175 EAL: request: mp_malloc_sync 00:04:22.175 EAL: No shared files mode enabled, IPC is disabled 00:04:22.175 EAL: Heap on socket 0 was expanded by 6MB 00:04:22.175 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.175 EAL: request: mp_malloc_sync 00:04:22.175 EAL: No shared files mode enabled, IPC is disabled 00:04:22.175 EAL: Heap on socket 0 was shrunk by 6MB 00:04:22.175 EAL: Trying to obtain current memory policy. 00:04:22.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.175 EAL: Restoring previous memory policy: 4 00:04:22.175 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.175 EAL: request: mp_malloc_sync 00:04:22.175 EAL: No shared files mode enabled, IPC is disabled 00:04:22.175 EAL: Heap on socket 0 was expanded by 10MB 00:04:22.175 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.175 EAL: request: mp_malloc_sync 00:04:22.175 EAL: No shared files mode enabled, IPC is disabled 00:04:22.175 EAL: Heap on socket 0 was shrunk by 10MB 00:04:22.175 EAL: Trying to obtain current memory policy. 00:04:22.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.175 EAL: Restoring previous memory policy: 4 00:04:22.175 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.175 EAL: request: mp_malloc_sync 00:04:22.175 EAL: No shared files mode enabled, IPC is disabled 00:04:22.175 EAL: Heap on socket 0 was expanded by 18MB 00:04:22.175 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.175 EAL: request: mp_malloc_sync 00:04:22.175 EAL: No shared files mode enabled, IPC is disabled 00:04:22.175 EAL: Heap on socket 0 was shrunk by 18MB 00:04:22.175 EAL: Trying to obtain current memory policy. 00:04:22.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.175 EAL: Restoring previous memory policy: 4 00:04:22.175 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.175 EAL: request: mp_malloc_sync 00:04:22.175 EAL: No shared files mode enabled, IPC is disabled 00:04:22.175 EAL: Heap on socket 0 was expanded by 34MB 00:04:22.175 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.175 EAL: request: mp_malloc_sync 00:04:22.175 EAL: No shared files mode enabled, IPC is disabled 00:04:22.175 EAL: Heap on socket 0 was shrunk by 34MB 00:04:22.434 EAL: Trying to obtain current memory policy. 
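Note on the pattern above (it repeats below with larger sizes): each "expanded by"/"shrunk by" pair is vtophys_spdk_malloc_test allocating a progressively larger buffer and freeing it, which fires the registered 'spdk:(nil)' mem event callback on the EAL heap. A minimal sketch for re-running just this suite outside the autotest harness, assuming the same built SPDK tree:

    # the CUnit binary takes no arguments; run_test only wraps it with tracing
    /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys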
00:04:22.434 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.434 EAL: Restoring previous memory policy: 4 00:04:22.434 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.434 EAL: request: mp_malloc_sync 00:04:22.434 EAL: No shared files mode enabled, IPC is disabled 00:04:22.434 EAL: Heap on socket 0 was expanded by 66MB 00:04:22.434 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.434 EAL: request: mp_malloc_sync 00:04:22.434 EAL: No shared files mode enabled, IPC is disabled 00:04:22.434 EAL: Heap on socket 0 was shrunk by 66MB 00:04:22.434 EAL: Trying to obtain current memory policy. 00:04:22.434 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.434 EAL: Restoring previous memory policy: 4 00:04:22.434 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.434 EAL: request: mp_malloc_sync 00:04:22.434 EAL: No shared files mode enabled, IPC is disabled 00:04:22.434 EAL: Heap on socket 0 was expanded by 130MB 00:04:22.692 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.692 EAL: request: mp_malloc_sync 00:04:22.692 EAL: No shared files mode enabled, IPC is disabled 00:04:22.692 EAL: Heap on socket 0 was shrunk by 130MB 00:04:22.692 EAL: Trying to obtain current memory policy. 00:04:22.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.692 EAL: Restoring previous memory policy: 4 00:04:22.692 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.692 EAL: request: mp_malloc_sync 00:04:22.692 EAL: No shared files mode enabled, IPC is disabled 00:04:22.692 EAL: Heap on socket 0 was expanded by 258MB 00:04:23.259 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.259 EAL: request: mp_malloc_sync 00:04:23.259 EAL: No shared files mode enabled, IPC is disabled 00:04:23.259 EAL: Heap on socket 0 was shrunk by 258MB 00:04:23.259 EAL: Trying to obtain current memory policy. 00:04:23.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.517 EAL: Restoring previous memory policy: 4 00:04:23.517 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.517 EAL: request: mp_malloc_sync 00:04:23.517 EAL: No shared files mode enabled, IPC is disabled 00:04:23.517 EAL: Heap on socket 0 was expanded by 514MB 00:04:24.083 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.083 EAL: request: mp_malloc_sync 00:04:24.083 EAL: No shared files mode enabled, IPC is disabled 00:04:24.083 EAL: Heap on socket 0 was shrunk by 514MB 00:04:24.648 EAL: Trying to obtain current memory policy. 
00:04:24.648 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.648 EAL: Restoring previous memory policy: 4 00:04:24.648 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.648 EAL: request: mp_malloc_sync 00:04:24.648 EAL: No shared files mode enabled, IPC is disabled 00:04:24.648 EAL: Heap on socket 0 was expanded by 1026MB 00:04:26.041 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.041 EAL: request: mp_malloc_sync 00:04:26.041 EAL: No shared files mode enabled, IPC is disabled 00:04:26.041 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:26.985 passed 00:04:26.985 00:04:26.985 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.985 suites 1 1 n/a 0 0 00:04:26.985 tests 2 2 2 0 0 00:04:26.985 asserts 5810 5810 5810 0 n/a 00:04:26.985 00:04:26.986 Elapsed time = 4.850 seconds 00:04:26.986 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.986 EAL: request: mp_malloc_sync 00:04:26.986 EAL: No shared files mode enabled, IPC is disabled 00:04:26.986 EAL: Heap on socket 0 was shrunk by 2MB 00:04:26.986 EAL: No shared files mode enabled, IPC is disabled 00:04:26.986 EAL: No shared files mode enabled, IPC is disabled 00:04:26.986 EAL: No shared files mode enabled, IPC is disabled 00:04:26.986 ************************************ 00:04:26.986 END TEST env_vtophys 00:04:26.986 ************************************ 00:04:26.986 00:04:26.986 real 0m5.111s 00:04:26.986 user 0m4.321s 00:04:26.986 sys 0m0.648s 00:04:26.986 23:05:59 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.986 23:05:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:26.986 23:05:59 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:26.986 23:05:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.986 23:05:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.986 23:05:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.986 ************************************ 00:04:26.986 START TEST env_pci 00:04:26.986 ************************************ 00:04:26.986 23:05:59 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:26.986 00:04:26.986 00:04:26.986 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.986 http://cunit.sourceforge.net/ 00:04:26.986 00:04:26.986 00:04:26.986 Suite: pci 00:04:26.986 Test: pci_hook ...[2024-11-25 23:05:59.125082] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56982 has claimed it 00:04:26.986 passed 00:04:26.986 00:04:26.986 EAL: Cannot find device (10000:00:01.0) 00:04:26.986 EAL: Failed to attach device on primary process 00:04:26.987 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.987 suites 1 1 n/a 0 0 00:04:26.987 tests 1 1 1 0 0 00:04:26.987 asserts 25 25 25 0 n/a 00:04:26.987 00:04:26.987 Elapsed time = 0.003 seconds 00:04:26.987 00:04:26.987 real 0m0.059s 00:04:26.987 user 0m0.028s 00:04:26.987 sys 0m0.030s 00:04:26.987 23:05:59 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.987 ************************************ 00:04:26.987 END TEST env_pci 00:04:26.987 ************************************ 00:04:26.987 23:05:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:26.987 23:05:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:26.987 23:05:59 env -- env/env.sh@15 -- # uname 00:04:26.987 23:05:59 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:26.987 23:05:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:26.987 23:05:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:26.987 23:05:59 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:26.987 23:05:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.987 23:05:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.987 ************************************ 00:04:26.987 START TEST env_dpdk_post_init 00:04:26.987 ************************************ 00:04:26.987 23:05:59 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:26.987 EAL: Detected CPU lcores: 10 00:04:26.987 EAL: Detected NUMA nodes: 1 00:04:26.987 EAL: Detected shared linkage of DPDK 00:04:26.987 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:26.987 EAL: Selected IOVA mode 'PA' 00:04:27.248 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:27.249 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:27.249 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:27.249 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:27.249 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:27.249 Starting DPDK initialization... 00:04:27.249 Starting SPDK post initialization... 00:04:27.249 SPDK NVMe probe 00:04:27.249 Attaching to 0000:00:10.0 00:04:27.249 Attaching to 0000:00:11.0 00:04:27.249 Attaching to 0000:00:12.0 00:04:27.249 Attaching to 0000:00:13.0 00:04:27.249 Attached to 0000:00:10.0 00:04:27.249 Attached to 0000:00:11.0 00:04:27.249 Attached to 0000:00:13.0 00:04:27.249 Attached to 0000:00:12.0 00:04:27.249 Cleaning up... 
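The attach sequence above probes the four QEMU-emulated NVMe controllers (1b36:0010) with the spdk_nvme driver. The flags are exactly the ones the harness passed; a sketch of the equivalent direct invocation, assuming the same built tree:

    # -c 0x1 pins DPDK to lcore 0; --base-virtaddr fixes the virtual address
    # layout so hugepage mappings land at a predictable base
    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000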
00:04:27.249 00:04:27.249 real 0m0.245s 00:04:27.249 user 0m0.081s 00:04:27.249 sys 0m0.065s 00:04:27.249 23:05:59 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.249 23:05:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:27.249 ************************************ 00:04:27.249 END TEST env_dpdk_post_init 00:04:27.249 ************************************ 00:04:27.249 23:05:59 env -- env/env.sh@26 -- # uname 00:04:27.249 23:05:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:27.249 23:05:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:27.249 23:05:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.249 23:05:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.249 23:05:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.249 ************************************ 00:04:27.249 START TEST env_mem_callbacks 00:04:27.249 ************************************ 00:04:27.249 23:05:59 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:27.249 EAL: Detected CPU lcores: 10 00:04:27.249 EAL: Detected NUMA nodes: 1 00:04:27.249 EAL: Detected shared linkage of DPDK 00:04:27.249 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:27.249 EAL: Selected IOVA mode 'PA' 00:04:27.507 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:27.507 00:04:27.507 00:04:27.507 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.507 http://cunit.sourceforge.net/ 00:04:27.507 00:04:27.507 00:04:27.507 Suite: memory 00:04:27.507 Test: test ... 00:04:27.507 register 0x200000200000 2097152 00:04:27.507 malloc 3145728 00:04:27.507 register 0x200000400000 4194304 00:04:27.507 buf 0x2000004fffc0 len 3145728 PASSED 00:04:27.507 malloc 64 00:04:27.507 buf 0x2000004ffec0 len 64 PASSED 00:04:27.507 malloc 4194304 00:04:27.507 register 0x200000800000 6291456 00:04:27.507 buf 0x2000009fffc0 len 4194304 PASSED 00:04:27.507 free 0x2000004fffc0 3145728 00:04:27.507 free 0x2000004ffec0 64 00:04:27.507 unregister 0x200000400000 4194304 PASSED 00:04:27.507 free 0x2000009fffc0 4194304 00:04:27.507 unregister 0x200000800000 6291456 PASSED 00:04:27.507 malloc 8388608 00:04:27.507 register 0x200000400000 10485760 00:04:27.507 buf 0x2000005fffc0 len 8388608 PASSED 00:04:27.507 free 0x2000005fffc0 8388608 00:04:27.507 unregister 0x200000400000 10485760 PASSED 00:04:27.507 passed 00:04:27.507 00:04:27.507 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.507 suites 1 1 n/a 0 0 00:04:27.507 tests 1 1 1 0 0 00:04:27.507 asserts 15 15 15 0 n/a 00:04:27.507 00:04:27.507 Elapsed time = 0.044 seconds 00:04:27.507 00:04:27.507 real 0m0.211s 00:04:27.507 user 0m0.059s 00:04:27.507 sys 0m0.050s 00:04:27.507 23:05:59 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.507 23:05:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:27.507 ************************************ 00:04:27.507 END TEST env_mem_callbacks 00:04:27.507 ************************************ 00:04:27.507 ************************************ 00:04:27.507 END TEST env 00:04:27.507 ************************************ 00:04:27.507 00:04:27.507 real 0m6.239s 00:04:27.507 user 0m4.885s 00:04:27.507 sys 0m1.007s 00:04:27.507 23:05:59 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.507 23:05:59 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:27.507 23:05:59 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:27.507 23:05:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.507 23:05:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.507 23:05:59 -- common/autotest_common.sh@10 -- # set +x 00:04:27.507 ************************************ 00:04:27.507 START TEST rpc 00:04:27.507 ************************************ 00:04:27.507 23:05:59 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:27.507 * Looking for test storage... 00:04:27.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:27.507 23:05:59 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:27.507 23:05:59 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:27.507 23:05:59 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:27.765 23:05:59 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:27.765 23:05:59 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.765 23:05:59 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.765 23:05:59 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.765 23:05:59 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.765 23:05:59 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.765 23:05:59 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.765 23:05:59 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.765 23:05:59 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.765 23:05:59 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.765 23:05:59 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.765 23:05:59 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.765 23:05:59 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:27.765 23:05:59 rpc -- scripts/common.sh@345 -- # : 1 00:04:27.765 23:05:59 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.765 23:05:59 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.765 23:05:59 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:27.765 23:05:59 rpc -- scripts/common.sh@353 -- # local d=1 00:04:27.765 23:05:59 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.765 23:05:59 rpc -- scripts/common.sh@355 -- # echo 1 00:04:27.765 23:05:59 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.765 23:05:59 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:27.765 23:05:59 rpc -- scripts/common.sh@353 -- # local d=2 00:04:27.765 23:05:59 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.765 23:05:59 rpc -- scripts/common.sh@355 -- # echo 2 00:04:27.765 23:05:59 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.765 23:05:59 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.765 23:05:59 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.765 23:05:59 rpc -- scripts/common.sh@368 -- # return 0 00:04:27.765 23:05:59 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.765 23:05:59 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:27.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.765 --rc genhtml_branch_coverage=1 00:04:27.765 --rc genhtml_function_coverage=1 00:04:27.765 --rc genhtml_legend=1 00:04:27.765 --rc geninfo_all_blocks=1 00:04:27.765 --rc geninfo_unexecuted_blocks=1 00:04:27.765 00:04:27.765 ' 00:04:27.765 23:05:59 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:27.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.765 --rc genhtml_branch_coverage=1 00:04:27.765 --rc genhtml_function_coverage=1 00:04:27.765 --rc genhtml_legend=1 00:04:27.765 --rc geninfo_all_blocks=1 00:04:27.765 --rc geninfo_unexecuted_blocks=1 00:04:27.765 00:04:27.765 ' 00:04:27.765 23:05:59 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:27.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.765 --rc genhtml_branch_coverage=1 00:04:27.765 --rc genhtml_function_coverage=1 00:04:27.765 --rc genhtml_legend=1 00:04:27.765 --rc geninfo_all_blocks=1 00:04:27.765 --rc geninfo_unexecuted_blocks=1 00:04:27.765 00:04:27.765 ' 00:04:27.765 23:05:59 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:27.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.765 --rc genhtml_branch_coverage=1 00:04:27.765 --rc genhtml_function_coverage=1 00:04:27.765 --rc genhtml_legend=1 00:04:27.765 --rc geninfo_all_blocks=1 00:04:27.765 --rc geninfo_unexecuted_blocks=1 00:04:27.765 00:04:27.765 ' 00:04:27.765 23:05:59 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:27.765 23:05:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57109 00:04:27.765 23:05:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.765 23:05:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57109 00:04:27.765 23:05:59 rpc -- common/autotest_common.sh@835 -- # '[' -z 57109 ']' 00:04:27.765 23:05:59 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.766 23:05:59 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.766 23:05:59 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
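At this point rpc.sh has launched spdk_tgt and waitforlisten is polling the default UNIX socket until the target answers. A hand-rolled sketch of that startup handshake, assuming the default socket path /var/tmp/spdk.sock (rpc_get_methods stands in as a cheap probe call; waitforlisten's internals may differ):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    # succeeds only once the RPC server is accepting connections
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done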
00:04:27.766 23:05:59 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.766 23:05:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.766 [2024-11-25 23:05:59.984986] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:04:27.766 [2024-11-25 23:05:59.985115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57109 ] 00:04:28.024 [2024-11-25 23:06:00.141540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.024 [2024-11-25 23:06:00.239584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:28.024 [2024-11-25 23:06:00.239639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57109' to capture a snapshot of events at runtime. 00:04:28.024 [2024-11-25 23:06:00.239649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:28.024 [2024-11-25 23:06:00.239659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:28.024 [2024-11-25 23:06:00.239666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57109 for offline analysis/debug. 00:04:28.024 [2024-11-25 23:06:00.240567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.598 23:06:00 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.598 23:06:00 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:28.598 23:06:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:28.598 23:06:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:28.598 23:06:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:28.598 23:06:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:28.598 23:06:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.598 23:06:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.598 23:06:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.598 ************************************ 00:04:28.598 START TEST rpc_integrity 00:04:28.598 ************************************ 00:04:28.598 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:28.598 23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.598 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.598 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.598 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.598 23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.598 23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:28.598 23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:28.598 23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.598 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.598 23:06:00 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.598 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.598 23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:28.599 23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.599 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.599 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.599 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.599 23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.599 { 00:04:28.599 "name": "Malloc0", 00:04:28.599 "aliases": [ 00:04:28.599 "f6f07c7c-9bdb-435c-ae5a-ae74bf9169e0" 00:04:28.599 ], 00:04:28.599 "product_name": "Malloc disk", 00:04:28.599 "block_size": 512, 00:04:28.599 "num_blocks": 16384, 00:04:28.599 "uuid": "f6f07c7c-9bdb-435c-ae5a-ae74bf9169e0", 00:04:28.599 "assigned_rate_limits": { 00:04:28.599 "rw_ios_per_sec": 0, 00:04:28.599 "rw_mbytes_per_sec": 0, 00:04:28.599 "r_mbytes_per_sec": 0, 00:04:28.599 "w_mbytes_per_sec": 0 00:04:28.599 }, 00:04:28.599 "claimed": false, 00:04:28.599 "zoned": false, 00:04:28.599 "supported_io_types": { 00:04:28.599 "read": true, 00:04:28.599 "write": true, 00:04:28.599 "unmap": true, 00:04:28.599 "flush": true, 00:04:28.599 "reset": true, 00:04:28.599 "nvme_admin": false, 00:04:28.599 "nvme_io": false, 00:04:28.599 "nvme_io_md": false, 00:04:28.599 "write_zeroes": true, 00:04:28.599 "zcopy": true, 00:04:28.599 "get_zone_info": false, 00:04:28.599 "zone_management": false, 00:04:28.599 "zone_append": false, 00:04:28.599 "compare": false, 00:04:28.599 "compare_and_write": false, 00:04:28.599 "abort": true, 00:04:28.599 "seek_hole": false, 00:04:28.599 "seek_data": false, 00:04:28.599 "copy": true, 00:04:28.599 "nvme_iov_md": false 00:04:28.599 }, 00:04:28.599 "memory_domains": [ 00:04:28.599 { 00:04:28.599 "dma_device_id": "system", 00:04:28.599 "dma_device_type": 1 00:04:28.599 }, 00:04:28.599 { 00:04:28.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.599 "dma_device_type": 2 00:04:28.599 } 00:04:28.599 ], 00:04:28.599 "driver_specific": {} 00:04:28.599 } 00:04:28.599 ]' 00:04:28.600 23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:28.600 23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.600 23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:28.600 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.600 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.600 [2024-11-25 23:06:00.943238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:28.600 [2024-11-25 23:06:00.943297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.600 [2024-11-25 23:06:00.943323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:28.600 [2024-11-25 23:06:00.943334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.600 [2024-11-25 23:06:00.945485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.600 [2024-11-25 23:06:00.945525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.600 Passthru0 00:04:28.600 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.600 
23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.600 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.600 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.868 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.868 23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.868 { 00:04:28.868 "name": "Malloc0", 00:04:28.868 "aliases": [ 00:04:28.868 "f6f07c7c-9bdb-435c-ae5a-ae74bf9169e0" 00:04:28.868 ], 00:04:28.868 "product_name": "Malloc disk", 00:04:28.868 "block_size": 512, 00:04:28.868 "num_blocks": 16384, 00:04:28.868 "uuid": "f6f07c7c-9bdb-435c-ae5a-ae74bf9169e0", 00:04:28.868 "assigned_rate_limits": { 00:04:28.868 "rw_ios_per_sec": 0, 00:04:28.868 "rw_mbytes_per_sec": 0, 00:04:28.868 "r_mbytes_per_sec": 0, 00:04:28.868 "w_mbytes_per_sec": 0 00:04:28.868 }, 00:04:28.868 "claimed": true, 00:04:28.868 "claim_type": "exclusive_write", 00:04:28.868 "zoned": false, 00:04:28.868 "supported_io_types": { 00:04:28.868 "read": true, 00:04:28.868 "write": true, 00:04:28.868 "unmap": true, 00:04:28.868 "flush": true, 00:04:28.868 "reset": true, 00:04:28.868 "nvme_admin": false, 00:04:28.868 "nvme_io": false, 00:04:28.868 "nvme_io_md": false, 00:04:28.868 "write_zeroes": true, 00:04:28.868 "zcopy": true, 00:04:28.868 "get_zone_info": false, 00:04:28.868 "zone_management": false, 00:04:28.868 "zone_append": false, 00:04:28.868 "compare": false, 00:04:28.868 "compare_and_write": false, 00:04:28.868 "abort": true, 00:04:28.868 "seek_hole": false, 00:04:28.868 "seek_data": false, 00:04:28.868 "copy": true, 00:04:28.868 "nvme_iov_md": false 00:04:28.868 }, 00:04:28.868 "memory_domains": [ 00:04:28.868 { 00:04:28.868 "dma_device_id": "system", 00:04:28.868 "dma_device_type": 1 00:04:28.868 }, 00:04:28.868 { 00:04:28.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.868 "dma_device_type": 2 00:04:28.868 } 00:04:28.868 ], 00:04:28.868 "driver_specific": {} 00:04:28.868 }, 00:04:28.868 { 00:04:28.868 "name": "Passthru0", 00:04:28.868 "aliases": [ 00:04:28.868 "d2dc23cf-df65-52c6-978d-120f3fb92d86" 00:04:28.868 ], 00:04:28.868 "product_name": "passthru", 00:04:28.868 "block_size": 512, 00:04:28.868 "num_blocks": 16384, 00:04:28.868 "uuid": "d2dc23cf-df65-52c6-978d-120f3fb92d86", 00:04:28.868 "assigned_rate_limits": { 00:04:28.868 "rw_ios_per_sec": 0, 00:04:28.868 "rw_mbytes_per_sec": 0, 00:04:28.868 "r_mbytes_per_sec": 0, 00:04:28.868 "w_mbytes_per_sec": 0 00:04:28.868 }, 00:04:28.868 "claimed": false, 00:04:28.868 "zoned": false, 00:04:28.868 "supported_io_types": { 00:04:28.868 "read": true, 00:04:28.868 "write": true, 00:04:28.868 "unmap": true, 00:04:28.868 "flush": true, 00:04:28.868 "reset": true, 00:04:28.868 "nvme_admin": false, 00:04:28.868 "nvme_io": false, 00:04:28.868 "nvme_io_md": false, 00:04:28.868 "write_zeroes": true, 00:04:28.868 "zcopy": true, 00:04:28.868 "get_zone_info": false, 00:04:28.868 "zone_management": false, 00:04:28.868 "zone_append": false, 00:04:28.868 "compare": false, 00:04:28.868 "compare_and_write": false, 00:04:28.868 "abort": true, 00:04:28.868 "seek_hole": false, 00:04:28.868 "seek_data": false, 00:04:28.868 "copy": true, 00:04:28.868 "nvme_iov_md": false 00:04:28.868 }, 00:04:28.868 "memory_domains": [ 00:04:28.868 { 00:04:28.868 "dma_device_id": "system", 00:04:28.868 "dma_device_type": 1 00:04:28.868 }, 00:04:28.868 { 00:04:28.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.868 "dma_device_type": 2 
00:04:28.868 } 00:04:28.868 ], 00:04:28.868 "driver_specific": { 00:04:28.868 "passthru": { 00:04:28.868 "name": "Passthru0", 00:04:28.868 "base_bdev_name": "Malloc0" 00:04:28.868 } 00:04:28.868 } 00:04:28.868 } 00:04:28.868 ]' 00:04:28.868 23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:28.868 23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.868 23:06:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.868 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.868 23:06:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.868 23:06:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.868 23:06:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:28.868 23:06:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.868 23:06:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.868 23:06:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.868 23:06:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.868 23:06:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.868 23:06:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.868 23:06:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.868 23:06:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.868 23:06:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:28.868 23:06:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.868 00:04:28.868 real 0m0.242s 00:04:28.868 user 0m0.128s 00:04:28.868 sys 0m0.030s 00:04:28.868 23:06:01 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.868 23:06:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.868 ************************************ 00:04:28.868 END TEST rpc_integrity 00:04:28.868 ************************************ 00:04:28.868 23:06:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:28.868 23:06:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.868 23:06:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.868 23:06:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.868 ************************************ 00:04:28.868 START TEST rpc_plugins 00:04:28.868 ************************************ 00:04:28.868 23:06:01 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:28.868 23:06:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:28.868 23:06:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.868 23:06:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.868 23:06:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.868 23:06:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:28.868 23:06:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:28.868 23:06:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.868 23:06:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.868 23:06:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.868 23:06:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:28.868 { 00:04:28.868 "name": "Malloc1", 00:04:28.868 "aliases": 
[ 00:04:28.868 "e44f7bc3-44f0-4e61-8857-9abe0392bd4b" 00:04:28.868 ], 00:04:28.868 "product_name": "Malloc disk", 00:04:28.868 "block_size": 4096, 00:04:28.868 "num_blocks": 256, 00:04:28.868 "uuid": "e44f7bc3-44f0-4e61-8857-9abe0392bd4b", 00:04:28.868 "assigned_rate_limits": { 00:04:28.868 "rw_ios_per_sec": 0, 00:04:28.868 "rw_mbytes_per_sec": 0, 00:04:28.868 "r_mbytes_per_sec": 0, 00:04:28.868 "w_mbytes_per_sec": 0 00:04:28.868 }, 00:04:28.868 "claimed": false, 00:04:28.868 "zoned": false, 00:04:28.868 "supported_io_types": { 00:04:28.868 "read": true, 00:04:28.868 "write": true, 00:04:28.868 "unmap": true, 00:04:28.868 "flush": true, 00:04:28.868 "reset": true, 00:04:28.868 "nvme_admin": false, 00:04:28.868 "nvme_io": false, 00:04:28.868 "nvme_io_md": false, 00:04:28.868 "write_zeroes": true, 00:04:28.868 "zcopy": true, 00:04:28.868 "get_zone_info": false, 00:04:28.868 "zone_management": false, 00:04:28.868 "zone_append": false, 00:04:28.868 "compare": false, 00:04:28.868 "compare_and_write": false, 00:04:28.868 "abort": true, 00:04:28.868 "seek_hole": false, 00:04:28.868 "seek_data": false, 00:04:28.868 "copy": true, 00:04:28.868 "nvme_iov_md": false 00:04:28.868 }, 00:04:28.868 "memory_domains": [ 00:04:28.868 { 00:04:28.868 "dma_device_id": "system", 00:04:28.868 "dma_device_type": 1 00:04:28.868 }, 00:04:28.868 { 00:04:28.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.868 "dma_device_type": 2 00:04:28.868 } 00:04:28.868 ], 00:04:28.869 "driver_specific": {} 00:04:28.869 } 00:04:28.869 ]' 00:04:28.869 23:06:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:28.869 23:06:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:28.869 23:06:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:28.869 23:06:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.869 23:06:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.869 23:06:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.869 23:06:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:28.869 23:06:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.869 23:06:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.869 23:06:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.869 23:06:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:28.869 23:06:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:28.869 23:06:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:28.869 00:04:28.869 real 0m0.108s 00:04:28.869 user 0m0.055s 00:04:28.869 sys 0m0.021s 00:04:28.869 23:06:01 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.869 23:06:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.869 ************************************ 00:04:28.869 END TEST rpc_plugins 00:04:28.869 ************************************ 00:04:29.128 23:06:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:29.128 23:06:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.128 23:06:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.128 23:06:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.128 ************************************ 00:04:29.128 START TEST rpc_trace_cmd_test 00:04:29.128 ************************************ 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:29.128 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57109", 00:04:29.128 "tpoint_group_mask": "0x8", 00:04:29.128 "iscsi_conn": { 00:04:29.128 "mask": "0x2", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 }, 00:04:29.128 "scsi": { 00:04:29.128 "mask": "0x4", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 }, 00:04:29.128 "bdev": { 00:04:29.128 "mask": "0x8", 00:04:29.128 "tpoint_mask": "0xffffffffffffffff" 00:04:29.128 }, 00:04:29.128 "nvmf_rdma": { 00:04:29.128 "mask": "0x10", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 }, 00:04:29.128 "nvmf_tcp": { 00:04:29.128 "mask": "0x20", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 }, 00:04:29.128 "ftl": { 00:04:29.128 "mask": "0x40", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 }, 00:04:29.128 "blobfs": { 00:04:29.128 "mask": "0x80", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 }, 00:04:29.128 "dsa": { 00:04:29.128 "mask": "0x200", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 }, 00:04:29.128 "thread": { 00:04:29.128 "mask": "0x400", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 }, 00:04:29.128 "nvme_pcie": { 00:04:29.128 "mask": "0x800", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 }, 00:04:29.128 "iaa": { 00:04:29.128 "mask": "0x1000", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 }, 00:04:29.128 "nvme_tcp": { 00:04:29.128 "mask": "0x2000", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 }, 00:04:29.128 "bdev_nvme": { 00:04:29.128 "mask": "0x4000", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 }, 00:04:29.128 "sock": { 00:04:29.128 "mask": "0x8000", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 }, 00:04:29.128 "blob": { 00:04:29.128 "mask": "0x10000", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 }, 00:04:29.128 "bdev_raid": { 00:04:29.128 "mask": "0x20000", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 }, 00:04:29.128 "scheduler": { 00:04:29.128 "mask": "0x40000", 00:04:29.128 "tpoint_mask": "0x0" 00:04:29.128 } 00:04:29.128 }' 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:29.128 23:06:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:29.128 00:04:29.128 real 0m0.171s 00:04:29.128 user 0m0.131s 00:04:29.128 sys 0m0.030s 00:04:29.129 23:06:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:04:29.129 23:06:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:29.129 ************************************ 00:04:29.129 END TEST rpc_trace_cmd_test 00:04:29.129 ************************************ 00:04:29.129 23:06:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:29.129 23:06:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:29.129 23:06:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:29.129 23:06:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.129 23:06:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.129 23:06:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.129 ************************************ 00:04:29.129 START TEST rpc_daemon_integrity 00:04:29.129 ************************************ 00:04:29.129 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:29.129 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:29.129 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.129 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.129 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.129 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:29.129 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:29.387 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:29.388 { 00:04:29.388 "name": "Malloc2", 00:04:29.388 "aliases": [ 00:04:29.388 "103b7e15-bbf3-4b6f-8cc3-2777a0b9c002" 00:04:29.388 ], 00:04:29.388 "product_name": "Malloc disk", 00:04:29.388 "block_size": 512, 00:04:29.388 "num_blocks": 16384, 00:04:29.388 "uuid": "103b7e15-bbf3-4b6f-8cc3-2777a0b9c002", 00:04:29.388 "assigned_rate_limits": { 00:04:29.388 "rw_ios_per_sec": 0, 00:04:29.388 "rw_mbytes_per_sec": 0, 00:04:29.388 "r_mbytes_per_sec": 0, 00:04:29.388 "w_mbytes_per_sec": 0 00:04:29.388 }, 00:04:29.388 "claimed": false, 00:04:29.388 "zoned": false, 00:04:29.388 "supported_io_types": { 00:04:29.388 "read": true, 00:04:29.388 "write": true, 00:04:29.388 "unmap": true, 00:04:29.388 "flush": true, 00:04:29.388 "reset": true, 00:04:29.388 "nvme_admin": false, 00:04:29.388 "nvme_io": false, 00:04:29.388 "nvme_io_md": false, 00:04:29.388 "write_zeroes": true, 00:04:29.388 "zcopy": true, 00:04:29.388 "get_zone_info": false, 00:04:29.388 "zone_management": false, 00:04:29.388 "zone_append": false, 00:04:29.388 "compare": false, 00:04:29.388 
"compare_and_write": false, 00:04:29.388 "abort": true, 00:04:29.388 "seek_hole": false, 00:04:29.388 "seek_data": false, 00:04:29.388 "copy": true, 00:04:29.388 "nvme_iov_md": false 00:04:29.388 }, 00:04:29.388 "memory_domains": [ 00:04:29.388 { 00:04:29.388 "dma_device_id": "system", 00:04:29.388 "dma_device_type": 1 00:04:29.388 }, 00:04:29.388 { 00:04:29.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.388 "dma_device_type": 2 00:04:29.388 } 00:04:29.388 ], 00:04:29.388 "driver_specific": {} 00:04:29.388 } 00:04:29.388 ]' 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.388 [2024-11-25 23:06:01.566071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:29.388 [2024-11-25 23:06:01.566124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:29.388 [2024-11-25 23:06:01.566142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:29.388 [2024-11-25 23:06:01.566153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:29.388 [2024-11-25 23:06:01.568247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:29.388 [2024-11-25 23:06:01.568283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:29.388 Passthru0 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:29.388 { 00:04:29.388 "name": "Malloc2", 00:04:29.388 "aliases": [ 00:04:29.388 "103b7e15-bbf3-4b6f-8cc3-2777a0b9c002" 00:04:29.388 ], 00:04:29.388 "product_name": "Malloc disk", 00:04:29.388 "block_size": 512, 00:04:29.388 "num_blocks": 16384, 00:04:29.388 "uuid": "103b7e15-bbf3-4b6f-8cc3-2777a0b9c002", 00:04:29.388 "assigned_rate_limits": { 00:04:29.388 "rw_ios_per_sec": 0, 00:04:29.388 "rw_mbytes_per_sec": 0, 00:04:29.388 "r_mbytes_per_sec": 0, 00:04:29.388 "w_mbytes_per_sec": 0 00:04:29.388 }, 00:04:29.388 "claimed": true, 00:04:29.388 "claim_type": "exclusive_write", 00:04:29.388 "zoned": false, 00:04:29.388 "supported_io_types": { 00:04:29.388 "read": true, 00:04:29.388 "write": true, 00:04:29.388 "unmap": true, 00:04:29.388 "flush": true, 00:04:29.388 "reset": true, 00:04:29.388 "nvme_admin": false, 00:04:29.388 "nvme_io": false, 00:04:29.388 "nvme_io_md": false, 00:04:29.388 "write_zeroes": true, 00:04:29.388 "zcopy": true, 00:04:29.388 "get_zone_info": false, 00:04:29.388 "zone_management": false, 00:04:29.388 "zone_append": false, 00:04:29.388 "compare": false, 00:04:29.388 "compare_and_write": false, 00:04:29.388 "abort": true, 00:04:29.388 "seek_hole": false, 00:04:29.388 "seek_data": false, 
00:04:29.388 "copy": true, 00:04:29.388 "nvme_iov_md": false 00:04:29.388 }, 00:04:29.388 "memory_domains": [ 00:04:29.388 { 00:04:29.388 "dma_device_id": "system", 00:04:29.388 "dma_device_type": 1 00:04:29.388 }, 00:04:29.388 { 00:04:29.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.388 "dma_device_type": 2 00:04:29.388 } 00:04:29.388 ], 00:04:29.388 "driver_specific": {} 00:04:29.388 }, 00:04:29.388 { 00:04:29.388 "name": "Passthru0", 00:04:29.388 "aliases": [ 00:04:29.388 "85ab3d2f-8d51-5cb2-bbd3-14c40d086a5d" 00:04:29.388 ], 00:04:29.388 "product_name": "passthru", 00:04:29.388 "block_size": 512, 00:04:29.388 "num_blocks": 16384, 00:04:29.388 "uuid": "85ab3d2f-8d51-5cb2-bbd3-14c40d086a5d", 00:04:29.388 "assigned_rate_limits": { 00:04:29.388 "rw_ios_per_sec": 0, 00:04:29.388 "rw_mbytes_per_sec": 0, 00:04:29.388 "r_mbytes_per_sec": 0, 00:04:29.388 "w_mbytes_per_sec": 0 00:04:29.388 }, 00:04:29.388 "claimed": false, 00:04:29.388 "zoned": false, 00:04:29.388 "supported_io_types": { 00:04:29.388 "read": true, 00:04:29.388 "write": true, 00:04:29.388 "unmap": true, 00:04:29.388 "flush": true, 00:04:29.388 "reset": true, 00:04:29.388 "nvme_admin": false, 00:04:29.388 "nvme_io": false, 00:04:29.388 "nvme_io_md": false, 00:04:29.388 "write_zeroes": true, 00:04:29.388 "zcopy": true, 00:04:29.388 "get_zone_info": false, 00:04:29.388 "zone_management": false, 00:04:29.388 "zone_append": false, 00:04:29.388 "compare": false, 00:04:29.388 "compare_and_write": false, 00:04:29.388 "abort": true, 00:04:29.388 "seek_hole": false, 00:04:29.388 "seek_data": false, 00:04:29.388 "copy": true, 00:04:29.388 "nvme_iov_md": false 00:04:29.388 }, 00:04:29.388 "memory_domains": [ 00:04:29.388 { 00:04:29.388 "dma_device_id": "system", 00:04:29.388 "dma_device_type": 1 00:04:29.388 }, 00:04:29.388 { 00:04:29.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.388 "dma_device_type": 2 00:04:29.388 } 00:04:29.388 ], 00:04:29.388 "driver_specific": { 00:04:29.388 "passthru": { 00:04:29.388 "name": "Passthru0", 00:04:29.388 "base_bdev_name": "Malloc2" 00:04:29.388 } 00:04:29.388 } 00:04:29.388 } 00:04:29.388 ]' 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:29.388 00:04:29.388 real 0m0.227s 00:04:29.388 user 0m0.127s 00:04:29.388 sys 0m0.025s 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.388 23:06:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.388 ************************************ 00:04:29.388 END TEST rpc_daemon_integrity 00:04:29.388 ************************************ 00:04:29.388 23:06:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:29.388 23:06:01 rpc -- rpc/rpc.sh@84 -- # killprocess 57109 00:04:29.388 23:06:01 rpc -- common/autotest_common.sh@954 -- # '[' -z 57109 ']' 00:04:29.388 23:06:01 rpc -- common/autotest_common.sh@958 -- # kill -0 57109 00:04:29.388 23:06:01 rpc -- common/autotest_common.sh@959 -- # uname 00:04:29.388 23:06:01 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.388 23:06:01 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57109 00:04:29.388 23:06:01 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.388 killing process with pid 57109 00:04:29.388 23:06:01 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.388 23:06:01 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57109' 00:04:29.388 23:06:01 rpc -- common/autotest_common.sh@973 -- # kill 57109 00:04:29.388 23:06:01 rpc -- common/autotest_common.sh@978 -- # wait 57109 00:04:31.295 00:04:31.295 real 0m3.488s 00:04:31.295 user 0m3.861s 00:04:31.295 sys 0m0.604s 00:04:31.295 23:06:03 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.295 ************************************ 00:04:31.295 END TEST rpc 00:04:31.295 ************************************ 00:04:31.295 23:06:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.295 23:06:03 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:31.295 23:06:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.295 23:06:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.295 23:06:03 -- common/autotest_common.sh@10 -- # set +x 00:04:31.295 ************************************ 00:04:31.295 START TEST skip_rpc 00:04:31.295 ************************************ 00:04:31.295 23:06:03 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:31.295 * Looking for test storage... 
00:04:31.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:31.295 23:06:03 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.295 23:06:03 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.295 23:06:03 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.295 23:06:03 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.295 23:06:03 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:31.295 23:06:03 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.295 23:06:03 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.295 --rc genhtml_branch_coverage=1 00:04:31.295 --rc genhtml_function_coverage=1 00:04:31.295 --rc genhtml_legend=1 00:04:31.295 --rc geninfo_all_blocks=1 00:04:31.295 --rc geninfo_unexecuted_blocks=1 00:04:31.295 00:04:31.296 ' 00:04:31.296 23:06:03 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.296 --rc genhtml_branch_coverage=1 00:04:31.296 --rc genhtml_function_coverage=1 00:04:31.296 --rc genhtml_legend=1 00:04:31.296 --rc geninfo_all_blocks=1 00:04:31.296 --rc geninfo_unexecuted_blocks=1 00:04:31.296 00:04:31.296 ' 00:04:31.296 23:06:03 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:31.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.296 --rc genhtml_branch_coverage=1 00:04:31.296 --rc genhtml_function_coverage=1 00:04:31.296 --rc genhtml_legend=1 00:04:31.296 --rc geninfo_all_blocks=1 00:04:31.296 --rc geninfo_unexecuted_blocks=1 00:04:31.296 00:04:31.296 ' 00:04:31.296 23:06:03 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.296 --rc genhtml_branch_coverage=1 00:04:31.296 --rc genhtml_function_coverage=1 00:04:31.296 --rc genhtml_legend=1 00:04:31.296 --rc geninfo_all_blocks=1 00:04:31.296 --rc geninfo_unexecuted_blocks=1 00:04:31.296 00:04:31.296 ' 00:04:31.296 23:06:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:31.296 23:06:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:31.296 23:06:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:31.296 23:06:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.296 23:06:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.296 23:06:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.296 ************************************ 00:04:31.296 START TEST skip_rpc 00:04:31.296 ************************************ 00:04:31.296 23:06:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:31.296 23:06:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57322 00:04:31.296 23:06:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.296 23:06:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:31.296 23:06:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:31.296 [2024-11-25 23:06:03.544221] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
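The scripts/common.sh trace above ("lt 1.15 2" via cmp_versions) is a field-wise version compare used to decide which lcov coverage flags to enable. A simplified sketch of the traced logic; the real helper also validates each field as a decimal and supports operators other than '<':

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {                          # sketch: handles only '<'
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<<"$1"         # split on '.', '-' or ':'
        IFS=.-: read -ra ver2 <<<"$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger
        done
        return 1                              # equal versions are not '<'
    }

Since the installed lcov reports 1.x, which compares lower than 2, the run exports the --rc lcov_branch_coverage/lcov_function_coverage options seen above.
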
00:04:31.296 [2024-11-25 23:06:03.544346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57322 ] 00:04:31.556 [2024-11-25 23:06:03.700109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.556 [2024-11-25 23:06:03.794507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57322 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57322 ']' 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57322 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57322 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.865 killing process with pid 57322 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57322' 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57322 00:04:36.865 23:06:08 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57322 00:04:37.436 00:04:37.436 real 0m6.297s 00:04:37.436 user 0m5.868s 00:04:37.436 sys 0m0.322s 00:04:37.436 23:06:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.436 ************************************ 00:04:37.436 END TEST skip_rpc 00:04:37.436 ************************************ 00:04:37.436 23:06:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:37.695 23:06:09 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:37.695 23:06:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.695 23:06:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.695 23:06:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.695 ************************************ 00:04:37.695 START TEST skip_rpc_with_json 00:04:37.695 ************************************ 00:04:37.695 23:06:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:37.695 23:06:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:37.695 23:06:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57415 00:04:37.695 23:06:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.695 23:06:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57415 00:04:37.695 23:06:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57415 ']' 00:04:37.695 23:06:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.695 23:06:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.695 23:06:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.695 23:06:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.695 23:06:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.695 23:06:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.695 [2024-11-25 23:06:09.896347] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
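The skip_rpc run that just finished follows the smallest possible shape: start spdk_tgt with its RPC server disabled, prove an RPC fails, and tear the target down. A sketch with the literal binary path and pid from this log; NOT and killprocess are the common/autotest_common.sh helpers traced throughout:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!                        # 57322 in this run
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    sleep 5                            # let the target finish starting up
    NOT rpc_cmd spdk_get_version       # must fail: nothing listens on the socket
    trap - SIGINT SIGTERM EXIT
    killprocess $spdk_pid              # kill, then wait for the reactor to exit

The skip_rpc_with_json variant starting here repeats the cycle, but first captures the running configuration via save_config and later restarts the target from that JSON file.
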
00:04:37.695 [2024-11-25 23:06:09.896458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57415 ] 00:04:37.695 [2024-11-25 23:06:10.053668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.955 [2024-11-25 23:06:10.174020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.527 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.527 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:38.527 23:06:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:38.527 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.527 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.527 [2024-11-25 23:06:10.816258] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:38.527 request: 00:04:38.527 { 00:04:38.527 "trtype": "tcp", 00:04:38.527 "method": "nvmf_get_transports", 00:04:38.527 "req_id": 1 00:04:38.527 } 00:04:38.527 Got JSON-RPC error response 00:04:38.527 response: 00:04:38.527 { 00:04:38.527 "code": -19, 00:04:38.527 "message": "No such device" 00:04:38.527 } 00:04:38.527 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:38.527 23:06:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:38.527 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.527 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.527 [2024-11-25 23:06:10.828377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:38.527 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.527 23:06:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:38.527 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.527 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.789 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.789 23:06:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:38.789 { 00:04:38.789 "subsystems": [ 00:04:38.789 { 00:04:38.789 "subsystem": "fsdev", 00:04:38.789 "config": [ 00:04:38.789 { 00:04:38.789 "method": "fsdev_set_opts", 00:04:38.789 "params": { 00:04:38.789 "fsdev_io_pool_size": 65535, 00:04:38.789 "fsdev_io_cache_size": 256 00:04:38.789 } 00:04:38.789 } 00:04:38.789 ] 00:04:38.789 }, 00:04:38.789 { 00:04:38.789 "subsystem": "keyring", 00:04:38.789 "config": [] 00:04:38.789 }, 00:04:38.789 { 00:04:38.789 "subsystem": "iobuf", 00:04:38.789 "config": [ 00:04:38.789 { 00:04:38.789 "method": "iobuf_set_options", 00:04:38.789 "params": { 00:04:38.789 "small_pool_count": 8192, 00:04:38.789 "large_pool_count": 1024, 00:04:38.789 "small_bufsize": 8192, 00:04:38.789 "large_bufsize": 135168, 00:04:38.789 "enable_numa": false 00:04:38.789 } 00:04:38.789 } 00:04:38.789 ] 00:04:38.789 }, 00:04:38.789 { 00:04:38.789 "subsystem": "sock", 00:04:38.789 "config": [ 00:04:38.789 { 
00:04:38.789 "method": "sock_set_default_impl", 00:04:38.789 "params": { 00:04:38.789 "impl_name": "posix" 00:04:38.789 } 00:04:38.789 }, 00:04:38.789 { 00:04:38.789 "method": "sock_impl_set_options", 00:04:38.789 "params": { 00:04:38.789 "impl_name": "ssl", 00:04:38.789 "recv_buf_size": 4096, 00:04:38.789 "send_buf_size": 4096, 00:04:38.789 "enable_recv_pipe": true, 00:04:38.789 "enable_quickack": false, 00:04:38.789 "enable_placement_id": 0, 00:04:38.789 "enable_zerocopy_send_server": true, 00:04:38.789 "enable_zerocopy_send_client": false, 00:04:38.789 "zerocopy_threshold": 0, 00:04:38.789 "tls_version": 0, 00:04:38.789 "enable_ktls": false 00:04:38.789 } 00:04:38.789 }, 00:04:38.789 { 00:04:38.789 "method": "sock_impl_set_options", 00:04:38.789 "params": { 00:04:38.789 "impl_name": "posix", 00:04:38.789 "recv_buf_size": 2097152, 00:04:38.789 "send_buf_size": 2097152, 00:04:38.789 "enable_recv_pipe": true, 00:04:38.789 "enable_quickack": false, 00:04:38.789 "enable_placement_id": 0, 00:04:38.789 "enable_zerocopy_send_server": true, 00:04:38.789 "enable_zerocopy_send_client": false, 00:04:38.789 "zerocopy_threshold": 0, 00:04:38.789 "tls_version": 0, 00:04:38.789 "enable_ktls": false 00:04:38.789 } 00:04:38.789 } 00:04:38.789 ] 00:04:38.789 }, 00:04:38.789 { 00:04:38.789 "subsystem": "vmd", 00:04:38.789 "config": [] 00:04:38.789 }, 00:04:38.789 { 00:04:38.789 "subsystem": "accel", 00:04:38.789 "config": [ 00:04:38.789 { 00:04:38.789 "method": "accel_set_options", 00:04:38.789 "params": { 00:04:38.789 "small_cache_size": 128, 00:04:38.789 "large_cache_size": 16, 00:04:38.789 "task_count": 2048, 00:04:38.789 "sequence_count": 2048, 00:04:38.789 "buf_count": 2048 00:04:38.789 } 00:04:38.789 } 00:04:38.789 ] 00:04:38.789 }, 00:04:38.789 { 00:04:38.789 "subsystem": "bdev", 00:04:38.789 "config": [ 00:04:38.789 { 00:04:38.789 "method": "bdev_set_options", 00:04:38.789 "params": { 00:04:38.789 "bdev_io_pool_size": 65535, 00:04:38.789 "bdev_io_cache_size": 256, 00:04:38.789 "bdev_auto_examine": true, 00:04:38.789 "iobuf_small_cache_size": 128, 00:04:38.789 "iobuf_large_cache_size": 16 00:04:38.789 } 00:04:38.789 }, 00:04:38.789 { 00:04:38.789 "method": "bdev_raid_set_options", 00:04:38.789 "params": { 00:04:38.789 "process_window_size_kb": 1024, 00:04:38.789 "process_max_bandwidth_mb_sec": 0 00:04:38.789 } 00:04:38.789 }, 00:04:38.789 { 00:04:38.789 "method": "bdev_iscsi_set_options", 00:04:38.789 "params": { 00:04:38.789 "timeout_sec": 30 00:04:38.789 } 00:04:38.789 }, 00:04:38.789 { 00:04:38.789 "method": "bdev_nvme_set_options", 00:04:38.789 "params": { 00:04:38.789 "action_on_timeout": "none", 00:04:38.789 "timeout_us": 0, 00:04:38.789 "timeout_admin_us": 0, 00:04:38.789 "keep_alive_timeout_ms": 10000, 00:04:38.789 "arbitration_burst": 0, 00:04:38.789 "low_priority_weight": 0, 00:04:38.789 "medium_priority_weight": 0, 00:04:38.789 "high_priority_weight": 0, 00:04:38.789 "nvme_adminq_poll_period_us": 10000, 00:04:38.789 "nvme_ioq_poll_period_us": 0, 00:04:38.789 "io_queue_requests": 0, 00:04:38.789 "delay_cmd_submit": true, 00:04:38.789 "transport_retry_count": 4, 00:04:38.789 "bdev_retry_count": 3, 00:04:38.789 "transport_ack_timeout": 0, 00:04:38.789 "ctrlr_loss_timeout_sec": 0, 00:04:38.789 "reconnect_delay_sec": 0, 00:04:38.789 "fast_io_fail_timeout_sec": 0, 00:04:38.789 "disable_auto_failback": false, 00:04:38.789 "generate_uuids": false, 00:04:38.789 "transport_tos": 0, 00:04:38.789 "nvme_error_stat": false, 00:04:38.789 "rdma_srq_size": 0, 00:04:38.789 "io_path_stat": false, 
00:04:38.789 "allow_accel_sequence": false, 00:04:38.789 "rdma_max_cq_size": 0, 00:04:38.789 "rdma_cm_event_timeout_ms": 0, 00:04:38.789 "dhchap_digests": [ 00:04:38.789 "sha256", 00:04:38.789 "sha384", 00:04:38.789 "sha512" 00:04:38.789 ], 00:04:38.789 "dhchap_dhgroups": [ 00:04:38.789 "null", 00:04:38.789 "ffdhe2048", 00:04:38.789 "ffdhe3072", 00:04:38.789 "ffdhe4096", 00:04:38.789 "ffdhe6144", 00:04:38.789 "ffdhe8192" 00:04:38.789 ] 00:04:38.789 } 00:04:38.789 }, 00:04:38.789 { 00:04:38.789 "method": "bdev_nvme_set_hotplug", 00:04:38.789 "params": { 00:04:38.789 "period_us": 100000, 00:04:38.789 "enable": false 00:04:38.789 } 00:04:38.789 }, 00:04:38.789 { 00:04:38.789 "method": "bdev_wait_for_examine" 00:04:38.789 } 00:04:38.789 ] 00:04:38.789 }, 00:04:38.789 { 00:04:38.789 "subsystem": "scsi", 00:04:38.789 "config": null 00:04:38.789 }, 00:04:38.789 { 00:04:38.789 "subsystem": "scheduler", 00:04:38.789 "config": [ 00:04:38.789 { 00:04:38.789 "method": "framework_set_scheduler", 00:04:38.789 "params": { 00:04:38.789 "name": "static" 00:04:38.790 } 00:04:38.790 } 00:04:38.790 ] 00:04:38.790 }, 00:04:38.790 { 00:04:38.790 "subsystem": "vhost_scsi", 00:04:38.790 "config": [] 00:04:38.790 }, 00:04:38.790 { 00:04:38.790 "subsystem": "vhost_blk", 00:04:38.790 "config": [] 00:04:38.790 }, 00:04:38.790 { 00:04:38.790 "subsystem": "ublk", 00:04:38.790 "config": [] 00:04:38.790 }, 00:04:38.790 { 00:04:38.790 "subsystem": "nbd", 00:04:38.790 "config": [] 00:04:38.790 }, 00:04:38.790 { 00:04:38.790 "subsystem": "nvmf", 00:04:38.790 "config": [ 00:04:38.790 { 00:04:38.790 "method": "nvmf_set_config", 00:04:38.790 "params": { 00:04:38.790 "discovery_filter": "match_any", 00:04:38.790 "admin_cmd_passthru": { 00:04:38.790 "identify_ctrlr": false 00:04:38.790 }, 00:04:38.790 "dhchap_digests": [ 00:04:38.790 "sha256", 00:04:38.790 "sha384", 00:04:38.790 "sha512" 00:04:38.790 ], 00:04:38.790 "dhchap_dhgroups": [ 00:04:38.790 "null", 00:04:38.790 "ffdhe2048", 00:04:38.790 "ffdhe3072", 00:04:38.790 "ffdhe4096", 00:04:38.790 "ffdhe6144", 00:04:38.790 "ffdhe8192" 00:04:38.790 ] 00:04:38.790 } 00:04:38.790 }, 00:04:38.790 { 00:04:38.790 "method": "nvmf_set_max_subsystems", 00:04:38.790 "params": { 00:04:38.790 "max_subsystems": 1024 00:04:38.790 } 00:04:38.790 }, 00:04:38.790 { 00:04:38.790 "method": "nvmf_set_crdt", 00:04:38.790 "params": { 00:04:38.790 "crdt1": 0, 00:04:38.790 "crdt2": 0, 00:04:38.790 "crdt3": 0 00:04:38.790 } 00:04:38.790 }, 00:04:38.790 { 00:04:38.790 "method": "nvmf_create_transport", 00:04:38.790 "params": { 00:04:38.790 "trtype": "TCP", 00:04:38.790 "max_queue_depth": 128, 00:04:38.790 "max_io_qpairs_per_ctrlr": 127, 00:04:38.790 "in_capsule_data_size": 4096, 00:04:38.790 "max_io_size": 131072, 00:04:38.790 "io_unit_size": 131072, 00:04:38.790 "max_aq_depth": 128, 00:04:38.790 "num_shared_buffers": 511, 00:04:38.790 "buf_cache_size": 4294967295, 00:04:38.790 "dif_insert_or_strip": false, 00:04:38.790 "zcopy": false, 00:04:38.790 "c2h_success": true, 00:04:38.790 "sock_priority": 0, 00:04:38.790 "abort_timeout_sec": 1, 00:04:38.790 "ack_timeout": 0, 00:04:38.790 "data_wr_pool_size": 0 00:04:38.790 } 00:04:38.790 } 00:04:38.790 ] 00:04:38.790 }, 00:04:38.790 { 00:04:38.790 "subsystem": "iscsi", 00:04:38.790 "config": [ 00:04:38.790 { 00:04:38.790 "method": "iscsi_set_options", 00:04:38.790 "params": { 00:04:38.790 "node_base": "iqn.2016-06.io.spdk", 00:04:38.790 "max_sessions": 128, 00:04:38.790 "max_connections_per_session": 2, 00:04:38.790 "max_queue_depth": 64, 00:04:38.790 
"default_time2wait": 2, 00:04:38.790 "default_time2retain": 20, 00:04:38.790 "first_burst_length": 8192, 00:04:38.790 "immediate_data": true, 00:04:38.790 "allow_duplicated_isid": false, 00:04:38.790 "error_recovery_level": 0, 00:04:38.790 "nop_timeout": 60, 00:04:38.790 "nop_in_interval": 30, 00:04:38.790 "disable_chap": false, 00:04:38.790 "require_chap": false, 00:04:38.790 "mutual_chap": false, 00:04:38.790 "chap_group": 0, 00:04:38.790 "max_large_datain_per_connection": 64, 00:04:38.790 "max_r2t_per_connection": 4, 00:04:38.790 "pdu_pool_size": 36864, 00:04:38.790 "immediate_data_pool_size": 16384, 00:04:38.790 "data_out_pool_size": 2048 00:04:38.790 } 00:04:38.790 } 00:04:38.790 ] 00:04:38.790 } 00:04:38.790 ] 00:04:38.790 } 00:04:38.790 23:06:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:38.790 23:06:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57415 00:04:38.790 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57415 ']' 00:04:38.790 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57415 00:04:38.790 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:38.790 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.790 23:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57415 00:04:38.790 23:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.790 23:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.790 killing process with pid 57415 00:04:38.790 23:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57415' 00:04:38.790 23:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57415 00:04:38.790 23:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57415 00:04:40.174 23:06:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57454 00:04:40.174 23:06:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:40.174 23:06:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:45.515 23:06:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57454 00:04:45.515 23:06:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57454 ']' 00:04:45.515 23:06:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57454 00:04:45.515 23:06:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:45.515 23:06:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.515 23:06:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57454 00:04:45.515 23:06:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.515 killing process with pid 57454 00:04:45.515 23:06:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.515 23:06:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57454' 00:04:45.515 23:06:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57454 00:04:45.515 23:06:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57454 00:04:46.468 23:06:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:46.468 23:06:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:46.468 00:04:46.468 real 0m8.968s 00:04:46.468 user 0m8.475s 00:04:46.468 sys 0m0.705s 00:04:46.468 23:06:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.468 23:06:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.468 ************************************ 00:04:46.468 END TEST skip_rpc_with_json 00:04:46.468 ************************************ 00:04:46.468 23:06:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:46.468 23:06:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.468 23:06:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.468 23:06:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.728 ************************************ 00:04:46.728 START TEST skip_rpc_with_delay 00:04:46.728 ************************************ 00:04:46.728 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:46.728 23:06:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:46.728 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:46.728 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:46.728 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.728 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.728 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.728 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.729 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.729 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.729 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.729 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:46.729 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:46.729 [2024-11-25 23:06:18.907368] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
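The *ERROR* line above is the expected result: skip_rpc_with_delay asserts that spdk_tgt refuses --wait-for-rpc once --no-rpc-server disables the RPC server. The NOT wrapper whose expansion is traced here simply inverts the wrapped command's exit status; a minimal sketch, omitting the argument validation the real common/autotest_common.sh helper performs:

    NOT() {
        local es=0
        "$@" || es=$?      # run the wrapped command, remember its status
        (( !es == 0 ))     # exit 0 only if the command failed
    }

Because spdk_tgt exits non-zero after printing the error, NOT returns success and the test records the rejection as a pass.
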
00:04:46.729 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:46.729 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:46.729 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:46.729 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:46.729 00:04:46.729 real 0m0.108s 00:04:46.729 user 0m0.057s 00:04:46.729 sys 0m0.050s 00:04:46.729 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.729 ************************************ 00:04:46.729 END TEST skip_rpc_with_delay 00:04:46.729 ************************************ 00:04:46.729 23:06:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:46.729 23:06:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:46.729 23:06:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:46.729 23:06:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:46.729 23:06:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.729 23:06:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.729 23:06:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.729 ************************************ 00:04:46.729 START TEST exit_on_failed_rpc_init 00:04:46.729 ************************************ 00:04:46.729 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:46.729 23:06:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57577 00:04:46.729 23:06:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57577 00:04:46.729 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57577 ']' 00:04:46.729 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.729 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.729 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.729 23:06:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.729 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.729 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:46.729 [2024-11-25 23:06:19.082621] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:04:46.729 [2024-11-25 23:06:19.082742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57577 ] 00:04:46.987 [2024-11-25 23:06:19.237781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.987 [2024-11-25 23:06:19.314020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.553 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.553 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:47.553 23:06:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.553 23:06:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:47.553 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:47.553 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:47.553 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.810 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.810 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.810 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.810 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.810 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.810 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.810 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:47.810 23:06:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:47.810 [2024-11-25 23:06:19.999089] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:04:47.810 [2024-11-25 23:06:19.999205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57595 ] 00:04:47.810 [2024-11-25 23:06:20.151854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.067 [2024-11-25 23:06:20.242825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.067 [2024-11-25 23:06:20.242899] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:48.067 [2024-11-25 23:06:20.242913] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:48.067 [2024-11-25 23:06:20.242925] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:48.067 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:48.067 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:48.067 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:48.067 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:48.067 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:48.067 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:48.067 23:06:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:48.067 23:06:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57577 00:04:48.067 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57577 ']' 00:04:48.067 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57577 00:04:48.067 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:48.324 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.324 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57577 00:04:48.324 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.324 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.324 killing process with pid 57577 00:04:48.324 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57577' 00:04:48.324 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57577 00:04:48.324 23:06:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57577 00:04:49.258 00:04:49.258 real 0m2.596s 00:04:49.258 user 0m2.885s 00:04:49.258 sys 0m0.418s 00:04:49.258 23:06:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.258 ************************************ 00:04:49.258 END TEST exit_on_failed_rpc_init 00:04:49.258 ************************************ 00:04:49.258 23:06:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.518 23:06:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:49.518 00:04:49.518 real 0m18.339s 00:04:49.518 user 0m17.421s 00:04:49.518 sys 0m1.678s 00:04:49.518 23:06:21 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.518 23:06:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.518 ************************************ 00:04:49.518 END TEST skip_rpc 00:04:49.518 ************************************ 00:04:49.518 23:06:21 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:49.518 23:06:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.518 23:06:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.518 23:06:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.518 
************************************ 00:04:49.518 START TEST rpc_client 00:04:49.518 ************************************ 00:04:49.518 23:06:21 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:49.518 * Looking for test storage... 00:04:49.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:49.518 23:06:21 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.518 23:06:21 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.518 23:06:21 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.518 23:06:21 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.518 23:06:21 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:49.518 23:06:21 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.518 23:06:21 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.518 --rc genhtml_branch_coverage=1 00:04:49.518 --rc genhtml_function_coverage=1 00:04:49.518 --rc genhtml_legend=1 00:04:49.518 --rc geninfo_all_blocks=1 00:04:49.518 --rc geninfo_unexecuted_blocks=1 00:04:49.518 00:04:49.518 ' 00:04:49.518 23:06:21 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.518 --rc genhtml_branch_coverage=1 00:04:49.518 --rc genhtml_function_coverage=1 00:04:49.518 --rc genhtml_legend=1 00:04:49.518 --rc geninfo_all_blocks=1 00:04:49.518 --rc geninfo_unexecuted_blocks=1 00:04:49.518 00:04:49.518 ' 00:04:49.518 23:06:21 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.518 --rc genhtml_branch_coverage=1 00:04:49.518 --rc genhtml_function_coverage=1 00:04:49.518 --rc genhtml_legend=1 00:04:49.518 --rc geninfo_all_blocks=1 00:04:49.518 --rc geninfo_unexecuted_blocks=1 00:04:49.518 00:04:49.518 ' 00:04:49.518 23:06:21 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.518 --rc genhtml_branch_coverage=1 00:04:49.518 --rc genhtml_function_coverage=1 00:04:49.518 --rc genhtml_legend=1 00:04:49.518 --rc geninfo_all_blocks=1 00:04:49.518 --rc geninfo_unexecuted_blocks=1 00:04:49.518 00:04:49.518 ' 00:04:49.518 23:06:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:49.518 OK 00:04:49.518 23:06:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:49.518 00:04:49.518 real 0m0.176s 00:04:49.518 user 0m0.108s 00:04:49.518 sys 0m0.074s 00:04:49.518 23:06:21 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.518 ************************************ 00:04:49.518 END TEST rpc_client 00:04:49.518 ************************************ 00:04:49.518 23:06:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:49.778 23:06:21 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:49.778 23:06:21 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.778 23:06:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.778 23:06:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.778 ************************************ 00:04:49.778 START TEST json_config 00:04:49.778 ************************************ 00:04:49.778 23:06:21 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:49.778 23:06:21 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.778 23:06:21 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.778 23:06:21 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.778 23:06:22 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.778 23:06:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.778 23:06:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.778 23:06:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.778 23:06:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.778 23:06:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.778 23:06:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.778 23:06:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.778 23:06:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.778 23:06:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.778 23:06:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.778 23:06:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.778 23:06:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:49.778 23:06:22 json_config -- scripts/common.sh@345 -- # : 1 00:04:49.778 23:06:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.778 23:06:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.778 23:06:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:49.778 23:06:22 json_config -- scripts/common.sh@353 -- # local d=1 00:04:49.778 23:06:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.778 23:06:22 json_config -- scripts/common.sh@355 -- # echo 1 00:04:49.778 23:06:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.778 23:06:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:49.778 23:06:22 json_config -- scripts/common.sh@353 -- # local d=2 00:04:49.778 23:06:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.778 23:06:22 json_config -- scripts/common.sh@355 -- # echo 2 00:04:49.778 23:06:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.778 23:06:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.778 23:06:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.778 23:06:22 json_config -- scripts/common.sh@368 -- # return 0 00:04:49.778 23:06:22 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.778 23:06:22 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.778 --rc genhtml_branch_coverage=1 00:04:49.778 --rc genhtml_function_coverage=1 00:04:49.778 --rc genhtml_legend=1 00:04:49.778 --rc geninfo_all_blocks=1 00:04:49.778 --rc geninfo_unexecuted_blocks=1 00:04:49.778 00:04:49.778 ' 00:04:49.778 23:06:22 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.778 --rc genhtml_branch_coverage=1 00:04:49.778 --rc genhtml_function_coverage=1 00:04:49.778 --rc genhtml_legend=1 00:04:49.778 --rc geninfo_all_blocks=1 00:04:49.778 --rc geninfo_unexecuted_blocks=1 00:04:49.778 00:04:49.778 ' 00:04:49.778 23:06:22 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.778 --rc genhtml_branch_coverage=1 00:04:49.778 --rc genhtml_function_coverage=1 00:04:49.778 --rc genhtml_legend=1 00:04:49.778 --rc geninfo_all_blocks=1 00:04:49.778 --rc geninfo_unexecuted_blocks=1 00:04:49.778 00:04:49.778 ' 00:04:49.778 23:06:22 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.778 --rc genhtml_branch_coverage=1 00:04:49.778 --rc genhtml_function_coverage=1 00:04:49.778 --rc genhtml_legend=1 00:04:49.778 --rc geninfo_all_blocks=1 00:04:49.778 --rc geninfo_unexecuted_blocks=1 00:04:49.778 00:04:49.778 ' 00:04:49.778 23:06:22 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.778 23:06:22 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:70c0241f-21df-45be-86df-ddce1c85fb81 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=70c0241f-21df-45be-86df-ddce1c85fb81 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:49.778 23:06:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:49.778 23:06:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.778 23:06:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.778 23:06:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.778 23:06:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.778 23:06:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.778 23:06:22 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.778 23:06:22 json_config -- paths/export.sh@5 -- # export PATH 00:04:49.778 23:06:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@51 -- # : 0 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:49.778 23:06:22 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:49.778 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:49.778 23:06:22 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:49.778 23:06:22 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:49.778 WARNING: No tests are enabled so not running JSON configuration tests 00:04:49.778 23:06:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:49.778 23:06:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:49.778 23:06:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:49.778 23:06:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:49.778 23:06:22 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:49.778 23:06:22 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:49.778 00:04:49.778 real 0m0.141s 00:04:49.778 user 0m0.092s 00:04:49.778 sys 0m0.050s 00:04:49.778 ************************************ 00:04:49.778 END TEST json_config 00:04:49.778 ************************************ 00:04:49.778 23:06:22 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.778 23:06:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.778 23:06:22 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:49.778 23:06:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.778 23:06:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.778 23:06:22 -- common/autotest_common.sh@10 -- # set +x 00:04:49.778 ************************************ 00:04:49.778 START TEST json_config_extra_key 00:04:49.778 ************************************ 00:04:49.778 23:06:22 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:50.037 23:06:22 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.037 23:06:22 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.037 23:06:22 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.037 23:06:22 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.037 23:06:22 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.037 23:06:22 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:50.037 23:06:22 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.037 23:06:22 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.037 --rc genhtml_branch_coverage=1 00:04:50.037 --rc genhtml_function_coverage=1 00:04:50.037 --rc genhtml_legend=1 00:04:50.037 --rc geninfo_all_blocks=1 00:04:50.037 --rc geninfo_unexecuted_blocks=1 00:04:50.037 00:04:50.037 ' 00:04:50.037 23:06:22 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.037 --rc genhtml_branch_coverage=1 00:04:50.037 --rc genhtml_function_coverage=1 00:04:50.038 --rc genhtml_legend=1 00:04:50.038 --rc geninfo_all_blocks=1 00:04:50.038 --rc geninfo_unexecuted_blocks=1 00:04:50.038 00:04:50.038 ' 00:04:50.038 23:06:22 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.038 --rc genhtml_branch_coverage=1 00:04:50.038 --rc genhtml_function_coverage=1 00:04:50.038 --rc genhtml_legend=1 00:04:50.038 --rc geninfo_all_blocks=1 00:04:50.038 --rc geninfo_unexecuted_blocks=1 00:04:50.038 00:04:50.038 ' 00:04:50.038 23:06:22 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.038 --rc genhtml_branch_coverage=1 00:04:50.038 --rc 
genhtml_function_coverage=1 00:04:50.038 --rc genhtml_legend=1 00:04:50.038 --rc geninfo_all_blocks=1 00:04:50.038 --rc geninfo_unexecuted_blocks=1 00:04:50.038 00:04:50.038 ' 00:04:50.038 23:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:70c0241f-21df-45be-86df-ddce1c85fb81 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=70c0241f-21df-45be-86df-ddce1c85fb81 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.038 23:06:22 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.038 23:06:22 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.038 23:06:22 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.038 23:06:22 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.038 23:06:22 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.038 23:06:22 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.038 23:06:22 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.038 23:06:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:50.038 23:06:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.038 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.038 23:06:22 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.038 23:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:50.038 23:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:50.038 23:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:50.038 23:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:50.038 23:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:50.038 23:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:50.038 23:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:50.038 23:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:50.038 23:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:50.038 23:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.038 23:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:50.038 INFO: launching applications... 
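The "[: : integer expression expected" complaints above come from test/nvmf/common.sh line 33, which expands to '[' '' -eq 1 ']': an unset flag is handed to a numeric test, so bash rejects the operand and the test merely returns non-zero. A minimal sketch of the failing shape and the usual guard; SOME_FLAG is a hypothetical stand-in for whichever variable is empty here, not the actual name used by common.sh:

# Failing shape: an unset flag inside a numeric test
if [ "$SOME_FLAG" -eq 1 ]; then :; fi        # -> "[: : integer expression expected"
# Guarded shape: default the expansion so the operand is always an integer
if [ "${SOME_FLAG:-0}" -eq 1 ]; then :; fi   # an unset flag now compares as 0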
00:04:50.038 23:06:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:50.038 23:06:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:50.038 23:06:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:50.038 23:06:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.038 23:06:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.038 23:06:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.038 23:06:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.038 23:06:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.038 Waiting for target to run... 00:04:50.038 23:06:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57787 00:04:50.038 23:06:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:50.038 23:06:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57787 /var/tmp/spdk_tgt.sock 00:04:50.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.038 23:06:22 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57787 ']' 00:04:50.038 23:06:22 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.038 23:06:22 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.038 23:06:22 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.038 23:06:22 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.038 23:06:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.038 23:06:22 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:50.038 [2024-11-25 23:06:22.340275] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:04:50.038 [2024-11-25 23:06:22.340767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57787 ] 00:04:50.299 [2024-11-25 23:06:22.655954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.558 [2024-11-25 23:06:22.749620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.131 23:06:23 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.131 23:06:23 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:51.131 23:06:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:51.131 00:04:51.131 23:06:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:51.131 INFO: shutting down applications... 
00:04:51.131 23:06:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:51.131 23:06:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:51.131 23:06:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:51.131 23:06:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57787 ]] 00:04:51.131 23:06:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57787 00:04:51.131 23:06:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:51.131 23:06:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.131 23:06:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57787 00:04:51.131 23:06:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.393 23:06:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.393 23:06:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.393 23:06:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57787 00:04:51.393 23:06:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.966 23:06:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.966 23:06:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.966 23:06:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57787 00:04:51.966 23:06:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.538 23:06:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.538 23:06:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.538 23:06:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57787 00:04:52.538 23:06:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.110 23:06:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.110 23:06:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.110 23:06:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57787 00:04:53.110 23:06:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:53.110 23:06:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:53.110 23:06:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:53.110 SPDK target shutdown done 00:04:53.110 Success 00:04:53.110 23:06:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:53.110 23:06:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:53.110 00:04:53.110 real 0m3.125s 00:04:53.110 user 0m2.697s 00:04:53.110 sys 0m0.380s 00:04:53.110 23:06:25 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.110 23:06:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:53.110 ************************************ 00:04:53.110 END TEST json_config_extra_key 00:04:53.110 ************************************ 00:04:53.110 23:06:25 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.110 23:06:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.110 23:06:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.110 23:06:25 -- common/autotest_common.sh@10 -- # set +x 00:04:53.110 
************************************ 00:04:53.110 START TEST alias_rpc 00:04:53.110 ************************************ 00:04:53.110 23:06:25 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.110 * Looking for test storage... 00:04:53.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:53.110 23:06:25 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.110 23:06:25 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.110 23:06:25 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.110 23:06:25 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:53.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
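The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message interleaved above is waitforlisten from autotest_common.sh blocking until the freshly launched spdk_tgt answers RPC. A rough sketch of the idea, not the exact autotest_common.sh code; the retry count, sleep interval, and use of spdk_get_version as the probe are assumptions:

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
        # a cheap RPC succeeding means the socket is up and answering
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" spdk_get_version &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}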
00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.110 23:06:25 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:53.110 23:06:25 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.110 23:06:25 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.110 --rc genhtml_branch_coverage=1 00:04:53.110 --rc genhtml_function_coverage=1 00:04:53.110 --rc genhtml_legend=1 00:04:53.110 --rc geninfo_all_blocks=1 00:04:53.110 --rc geninfo_unexecuted_blocks=1 00:04:53.111 00:04:53.111 ' 00:04:53.111 23:06:25 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.111 --rc genhtml_branch_coverage=1 00:04:53.111 --rc genhtml_function_coverage=1 00:04:53.111 --rc genhtml_legend=1 00:04:53.111 --rc geninfo_all_blocks=1 00:04:53.111 --rc geninfo_unexecuted_blocks=1 00:04:53.111 00:04:53.111 ' 00:04:53.111 23:06:25 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.111 --rc genhtml_branch_coverage=1 00:04:53.111 --rc genhtml_function_coverage=1 00:04:53.111 --rc genhtml_legend=1 00:04:53.111 --rc geninfo_all_blocks=1 00:04:53.111 --rc geninfo_unexecuted_blocks=1 00:04:53.111 00:04:53.111 ' 00:04:53.111 23:06:25 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.111 --rc genhtml_branch_coverage=1 00:04:53.111 --rc genhtml_function_coverage=1 00:04:53.111 --rc genhtml_legend=1 00:04:53.111 --rc geninfo_all_blocks=1 00:04:53.111 --rc geninfo_unexecuted_blocks=1 00:04:53.111 00:04:53.111 ' 00:04:53.111 23:06:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:53.111 23:06:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57883 00:04:53.111 23:06:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57883 00:04:53.111 23:06:25 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57883 ']' 00:04:53.111 23:06:25 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.111 23:06:25 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.111 23:06:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.111 23:06:25 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.111 23:06:25 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.111 23:06:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.371 [2024-11-25 23:06:25.522476] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
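The lt 1.15 2 / cmp_versions walk completed just above decides whether the installed lcov predates 2.x: both version strings are split on IFS=.-: into arrays and compared field by field as integers. A standalone re-sketch of that logic, assuming purely numeric fields (the real scripts/common.sh additionally validates each field with its decimal helper):

cmp_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
    done
    return 1   # equal is not "less than"
}
cmp_lt 1.15 2 && echo "lcov is older than 2"   # same verdict as the lt 1.15 2 call above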
00:04:53.371 [2024-11-25 23:06:25.522597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57883 ] 00:04:53.371 [2024-11-25 23:06:25.679242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.633 [2024-11-25 23:06:25.775962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.203 23:06:26 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.203 23:06:26 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:54.203 23:06:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:54.464 23:06:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57883 00:04:54.464 23:06:26 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57883 ']' 00:04:54.464 23:06:26 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57883 00:04:54.464 23:06:26 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:54.464 23:06:26 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.464 23:06:26 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57883 00:04:54.464 killing process with pid 57883 00:04:54.464 23:06:26 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.464 23:06:26 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.464 23:06:26 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57883' 00:04:54.464 23:06:26 alias_rpc -- common/autotest_common.sh@973 -- # kill 57883 00:04:54.464 23:06:26 alias_rpc -- common/autotest_common.sh@978 -- # wait 57883 00:04:55.856 00:04:55.856 real 0m2.799s 00:04:55.856 user 0m2.906s 00:04:55.856 sys 0m0.385s 00:04:55.856 23:06:28 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.856 ************************************ 00:04:55.856 END TEST alias_rpc 00:04:55.856 ************************************ 00:04:55.856 23:06:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.856 23:06:28 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:55.856 23:06:28 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:55.856 23:06:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.856 23:06:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.856 23:06:28 -- common/autotest_common.sh@10 -- # set +x 00:04:55.856 ************************************ 00:04:55.856 START TEST spdkcli_tcp 00:04:55.856 ************************************ 00:04:55.856 23:06:28 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:56.115 * Looking for test storage... 
00:04:56.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:56.115 23:06:28 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.115 23:06:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.115 23:06:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.115 23:06:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.115 23:06:28 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:56.115 23:06:28 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.115 23:06:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.115 --rc genhtml_branch_coverage=1 00:04:56.115 --rc genhtml_function_coverage=1 00:04:56.115 --rc genhtml_legend=1 00:04:56.115 --rc geninfo_all_blocks=1 00:04:56.115 --rc geninfo_unexecuted_blocks=1 00:04:56.115 00:04:56.115 ' 00:04:56.115 23:06:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.115 --rc genhtml_branch_coverage=1 00:04:56.115 --rc genhtml_function_coverage=1 00:04:56.115 --rc genhtml_legend=1 00:04:56.115 --rc geninfo_all_blocks=1 00:04:56.115 --rc geninfo_unexecuted_blocks=1 00:04:56.115 
00:04:56.115 ' 00:04:56.115 23:06:28 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.115 --rc genhtml_branch_coverage=1 00:04:56.115 --rc genhtml_function_coverage=1 00:04:56.115 --rc genhtml_legend=1 00:04:56.115 --rc geninfo_all_blocks=1 00:04:56.115 --rc geninfo_unexecuted_blocks=1 00:04:56.115 00:04:56.115 ' 00:04:56.115 23:06:28 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.115 --rc genhtml_branch_coverage=1 00:04:56.115 --rc genhtml_function_coverage=1 00:04:56.115 --rc genhtml_legend=1 00:04:56.115 --rc geninfo_all_blocks=1 00:04:56.115 --rc geninfo_unexecuted_blocks=1 00:04:56.115 00:04:56.115 ' 00:04:56.116 23:06:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:56.116 23:06:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:56.116 23:06:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:56.116 23:06:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:56.116 23:06:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:56.116 23:06:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:56.116 23:06:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:56.116 23:06:28 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.116 23:06:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.116 23:06:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57974 00:04:56.116 23:06:28 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57974 00:04:56.116 23:06:28 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57974 ']' 00:04:56.116 23:06:28 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.116 23:06:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.116 23:06:28 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.116 23:06:28 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.116 23:06:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:56.116 23:06:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.116 [2024-11-25 23:06:28.404423] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
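spdk_tgt is launched here with -m 0x3 -p 0: -m takes a hex core bitmask where bit i selects core i, so 0x3 means cores 0 and 1 (matching the two reactors reported below), and -p pins the main core. A small sketch of the mask arithmetic; the backgrounded launch line mirrors the trace rather than adding anything new:

# bit 0 | bit 1 -> 0x3, i.e. run reactors on cores 0 and 1
printf 'core mask: 0x%x\n' $(( (1 << 0) | (1 << 1) ))
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &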
00:04:56.116 [2024-11-25 23:06:28.404988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57974 ] 00:04:56.374 [2024-11-25 23:06:28.560219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.374 [2024-11-25 23:06:28.639192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.374 [2024-11-25 23:06:28.639320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.947 23:06:29 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.947 23:06:29 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:56.947 23:06:29 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57991 00:04:56.947 23:06:29 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:56.947 23:06:29 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:57.206 [ 00:04:57.206 "bdev_malloc_delete", 00:04:57.206 "bdev_malloc_create", 00:04:57.206 "bdev_null_resize", 00:04:57.206 "bdev_null_delete", 00:04:57.206 "bdev_null_create", 00:04:57.206 "bdev_nvme_cuse_unregister", 00:04:57.206 "bdev_nvme_cuse_register", 00:04:57.206 "bdev_opal_new_user", 00:04:57.206 "bdev_opal_set_lock_state", 00:04:57.206 "bdev_opal_delete", 00:04:57.206 "bdev_opal_get_info", 00:04:57.206 "bdev_opal_create", 00:04:57.206 "bdev_nvme_opal_revert", 00:04:57.206 "bdev_nvme_opal_init", 00:04:57.206 "bdev_nvme_send_cmd", 00:04:57.206 "bdev_nvme_set_keys", 00:04:57.206 "bdev_nvme_get_path_iostat", 00:04:57.206 "bdev_nvme_get_mdns_discovery_info", 00:04:57.206 "bdev_nvme_stop_mdns_discovery", 00:04:57.206 "bdev_nvme_start_mdns_discovery", 00:04:57.206 "bdev_nvme_set_multipath_policy", 00:04:57.206 "bdev_nvme_set_preferred_path", 00:04:57.206 "bdev_nvme_get_io_paths", 00:04:57.206 "bdev_nvme_remove_error_injection", 00:04:57.206 "bdev_nvme_add_error_injection", 00:04:57.206 "bdev_nvme_get_discovery_info", 00:04:57.206 "bdev_nvme_stop_discovery", 00:04:57.206 "bdev_nvme_start_discovery", 00:04:57.206 "bdev_nvme_get_controller_health_info", 00:04:57.206 "bdev_nvme_disable_controller", 00:04:57.206 "bdev_nvme_enable_controller", 00:04:57.206 "bdev_nvme_reset_controller", 00:04:57.206 "bdev_nvme_get_transport_statistics", 00:04:57.206 "bdev_nvme_apply_firmware", 00:04:57.206 "bdev_nvme_detach_controller", 00:04:57.206 "bdev_nvme_get_controllers", 00:04:57.206 "bdev_nvme_attach_controller", 00:04:57.206 "bdev_nvme_set_hotplug", 00:04:57.206 "bdev_nvme_set_options", 00:04:57.206 "bdev_passthru_delete", 00:04:57.206 "bdev_passthru_create", 00:04:57.206 "bdev_lvol_set_parent_bdev", 00:04:57.206 "bdev_lvol_set_parent", 00:04:57.206 "bdev_lvol_check_shallow_copy", 00:04:57.206 "bdev_lvol_start_shallow_copy", 00:04:57.206 "bdev_lvol_grow_lvstore", 00:04:57.206 "bdev_lvol_get_lvols", 00:04:57.206 "bdev_lvol_get_lvstores", 00:04:57.206 "bdev_lvol_delete", 00:04:57.206 "bdev_lvol_set_read_only", 00:04:57.206 "bdev_lvol_resize", 00:04:57.206 "bdev_lvol_decouple_parent", 00:04:57.206 "bdev_lvol_inflate", 00:04:57.206 "bdev_lvol_rename", 00:04:57.206 "bdev_lvol_clone_bdev", 00:04:57.206 "bdev_lvol_clone", 00:04:57.206 "bdev_lvol_snapshot", 00:04:57.206 "bdev_lvol_create", 00:04:57.206 "bdev_lvol_delete_lvstore", 00:04:57.206 "bdev_lvol_rename_lvstore", 00:04:57.206 
"bdev_lvol_create_lvstore", 00:04:57.206 "bdev_raid_set_options", 00:04:57.206 "bdev_raid_remove_base_bdev", 00:04:57.206 "bdev_raid_add_base_bdev", 00:04:57.206 "bdev_raid_delete", 00:04:57.206 "bdev_raid_create", 00:04:57.206 "bdev_raid_get_bdevs", 00:04:57.206 "bdev_error_inject_error", 00:04:57.206 "bdev_error_delete", 00:04:57.206 "bdev_error_create", 00:04:57.206 "bdev_split_delete", 00:04:57.206 "bdev_split_create", 00:04:57.206 "bdev_delay_delete", 00:04:57.206 "bdev_delay_create", 00:04:57.206 "bdev_delay_update_latency", 00:04:57.206 "bdev_zone_block_delete", 00:04:57.206 "bdev_zone_block_create", 00:04:57.206 "blobfs_create", 00:04:57.206 "blobfs_detect", 00:04:57.206 "blobfs_set_cache_size", 00:04:57.206 "bdev_xnvme_delete", 00:04:57.206 "bdev_xnvme_create", 00:04:57.206 "bdev_aio_delete", 00:04:57.206 "bdev_aio_rescan", 00:04:57.206 "bdev_aio_create", 00:04:57.206 "bdev_ftl_set_property", 00:04:57.206 "bdev_ftl_get_properties", 00:04:57.206 "bdev_ftl_get_stats", 00:04:57.206 "bdev_ftl_unmap", 00:04:57.206 "bdev_ftl_unload", 00:04:57.206 "bdev_ftl_delete", 00:04:57.206 "bdev_ftl_load", 00:04:57.206 "bdev_ftl_create", 00:04:57.206 "bdev_virtio_attach_controller", 00:04:57.206 "bdev_virtio_scsi_get_devices", 00:04:57.206 "bdev_virtio_detach_controller", 00:04:57.206 "bdev_virtio_blk_set_hotplug", 00:04:57.206 "bdev_iscsi_delete", 00:04:57.206 "bdev_iscsi_create", 00:04:57.206 "bdev_iscsi_set_options", 00:04:57.206 "accel_error_inject_error", 00:04:57.206 "ioat_scan_accel_module", 00:04:57.206 "dsa_scan_accel_module", 00:04:57.206 "iaa_scan_accel_module", 00:04:57.206 "keyring_file_remove_key", 00:04:57.206 "keyring_file_add_key", 00:04:57.206 "keyring_linux_set_options", 00:04:57.206 "fsdev_aio_delete", 00:04:57.206 "fsdev_aio_create", 00:04:57.206 "iscsi_get_histogram", 00:04:57.206 "iscsi_enable_histogram", 00:04:57.206 "iscsi_set_options", 00:04:57.206 "iscsi_get_auth_groups", 00:04:57.206 "iscsi_auth_group_remove_secret", 00:04:57.206 "iscsi_auth_group_add_secret", 00:04:57.206 "iscsi_delete_auth_group", 00:04:57.206 "iscsi_create_auth_group", 00:04:57.206 "iscsi_set_discovery_auth", 00:04:57.206 "iscsi_get_options", 00:04:57.206 "iscsi_target_node_request_logout", 00:04:57.206 "iscsi_target_node_set_redirect", 00:04:57.206 "iscsi_target_node_set_auth", 00:04:57.206 "iscsi_target_node_add_lun", 00:04:57.206 "iscsi_get_stats", 00:04:57.206 "iscsi_get_connections", 00:04:57.206 "iscsi_portal_group_set_auth", 00:04:57.206 "iscsi_start_portal_group", 00:04:57.206 "iscsi_delete_portal_group", 00:04:57.206 "iscsi_create_portal_group", 00:04:57.206 "iscsi_get_portal_groups", 00:04:57.206 "iscsi_delete_target_node", 00:04:57.206 "iscsi_target_node_remove_pg_ig_maps", 00:04:57.206 "iscsi_target_node_add_pg_ig_maps", 00:04:57.206 "iscsi_create_target_node", 00:04:57.206 "iscsi_get_target_nodes", 00:04:57.206 "iscsi_delete_initiator_group", 00:04:57.206 "iscsi_initiator_group_remove_initiators", 00:04:57.206 "iscsi_initiator_group_add_initiators", 00:04:57.206 "iscsi_create_initiator_group", 00:04:57.206 "iscsi_get_initiator_groups", 00:04:57.206 "nvmf_set_crdt", 00:04:57.206 "nvmf_set_config", 00:04:57.206 "nvmf_set_max_subsystems", 00:04:57.206 "nvmf_stop_mdns_prr", 00:04:57.206 "nvmf_publish_mdns_prr", 00:04:57.206 "nvmf_subsystem_get_listeners", 00:04:57.206 "nvmf_subsystem_get_qpairs", 00:04:57.206 "nvmf_subsystem_get_controllers", 00:04:57.206 "nvmf_get_stats", 00:04:57.206 "nvmf_get_transports", 00:04:57.206 "nvmf_create_transport", 00:04:57.206 "nvmf_get_targets", 00:04:57.206 
"nvmf_delete_target", 00:04:57.206 "nvmf_create_target", 00:04:57.206 "nvmf_subsystem_allow_any_host", 00:04:57.206 "nvmf_subsystem_set_keys", 00:04:57.206 "nvmf_subsystem_remove_host", 00:04:57.206 "nvmf_subsystem_add_host", 00:04:57.206 "nvmf_ns_remove_host", 00:04:57.206 "nvmf_ns_add_host", 00:04:57.206 "nvmf_subsystem_remove_ns", 00:04:57.206 "nvmf_subsystem_set_ns_ana_group", 00:04:57.206 "nvmf_subsystem_add_ns", 00:04:57.206 "nvmf_subsystem_listener_set_ana_state", 00:04:57.206 "nvmf_discovery_get_referrals", 00:04:57.206 "nvmf_discovery_remove_referral", 00:04:57.206 "nvmf_discovery_add_referral", 00:04:57.206 "nvmf_subsystem_remove_listener", 00:04:57.206 "nvmf_subsystem_add_listener", 00:04:57.206 "nvmf_delete_subsystem", 00:04:57.206 "nvmf_create_subsystem", 00:04:57.206 "nvmf_get_subsystems", 00:04:57.206 "env_dpdk_get_mem_stats", 00:04:57.206 "nbd_get_disks", 00:04:57.206 "nbd_stop_disk", 00:04:57.206 "nbd_start_disk", 00:04:57.206 "ublk_recover_disk", 00:04:57.206 "ublk_get_disks", 00:04:57.206 "ublk_stop_disk", 00:04:57.206 "ublk_start_disk", 00:04:57.206 "ublk_destroy_target", 00:04:57.206 "ublk_create_target", 00:04:57.206 "virtio_blk_create_transport", 00:04:57.206 "virtio_blk_get_transports", 00:04:57.206 "vhost_controller_set_coalescing", 00:04:57.206 "vhost_get_controllers", 00:04:57.206 "vhost_delete_controller", 00:04:57.206 "vhost_create_blk_controller", 00:04:57.206 "vhost_scsi_controller_remove_target", 00:04:57.206 "vhost_scsi_controller_add_target", 00:04:57.206 "vhost_start_scsi_controller", 00:04:57.206 "vhost_create_scsi_controller", 00:04:57.206 "thread_set_cpumask", 00:04:57.206 "scheduler_set_options", 00:04:57.206 "framework_get_governor", 00:04:57.206 "framework_get_scheduler", 00:04:57.206 "framework_set_scheduler", 00:04:57.206 "framework_get_reactors", 00:04:57.206 "thread_get_io_channels", 00:04:57.206 "thread_get_pollers", 00:04:57.206 "thread_get_stats", 00:04:57.206 "framework_monitor_context_switch", 00:04:57.207 "spdk_kill_instance", 00:04:57.207 "log_enable_timestamps", 00:04:57.207 "log_get_flags", 00:04:57.207 "log_clear_flag", 00:04:57.207 "log_set_flag", 00:04:57.207 "log_get_level", 00:04:57.207 "log_set_level", 00:04:57.207 "log_get_print_level", 00:04:57.207 "log_set_print_level", 00:04:57.207 "framework_enable_cpumask_locks", 00:04:57.207 "framework_disable_cpumask_locks", 00:04:57.207 "framework_wait_init", 00:04:57.207 "framework_start_init", 00:04:57.207 "scsi_get_devices", 00:04:57.207 "bdev_get_histogram", 00:04:57.207 "bdev_enable_histogram", 00:04:57.207 "bdev_set_qos_limit", 00:04:57.207 "bdev_set_qd_sampling_period", 00:04:57.207 "bdev_get_bdevs", 00:04:57.207 "bdev_reset_iostat", 00:04:57.207 "bdev_get_iostat", 00:04:57.207 "bdev_examine", 00:04:57.207 "bdev_wait_for_examine", 00:04:57.207 "bdev_set_options", 00:04:57.207 "accel_get_stats", 00:04:57.207 "accel_set_options", 00:04:57.207 "accel_set_driver", 00:04:57.207 "accel_crypto_key_destroy", 00:04:57.207 "accel_crypto_keys_get", 00:04:57.207 "accel_crypto_key_create", 00:04:57.207 "accel_assign_opc", 00:04:57.207 "accel_get_module_info", 00:04:57.207 "accel_get_opc_assignments", 00:04:57.207 "vmd_rescan", 00:04:57.207 "vmd_remove_device", 00:04:57.207 "vmd_enable", 00:04:57.207 "sock_get_default_impl", 00:04:57.207 "sock_set_default_impl", 00:04:57.207 "sock_impl_set_options", 00:04:57.207 "sock_impl_get_options", 00:04:57.207 "iobuf_get_stats", 00:04:57.207 "iobuf_set_options", 00:04:57.207 "keyring_get_keys", 00:04:57.207 "framework_get_pci_devices", 00:04:57.207 
"framework_get_config", 00:04:57.207 "framework_get_subsystems", 00:04:57.207 "fsdev_set_opts", 00:04:57.207 "fsdev_get_opts", 00:04:57.207 "trace_get_info", 00:04:57.207 "trace_get_tpoint_group_mask", 00:04:57.207 "trace_disable_tpoint_group", 00:04:57.207 "trace_enable_tpoint_group", 00:04:57.207 "trace_clear_tpoint_mask", 00:04:57.207 "trace_set_tpoint_mask", 00:04:57.207 "notify_get_notifications", 00:04:57.207 "notify_get_types", 00:04:57.207 "spdk_get_version", 00:04:57.207 "rpc_get_methods" 00:04:57.207 ] 00:04:57.207 23:06:29 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:57.207 23:06:29 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.207 23:06:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.207 23:06:29 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:57.207 23:06:29 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57974 00:04:57.207 23:06:29 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57974 ']' 00:04:57.207 23:06:29 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57974 00:04:57.207 23:06:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:57.207 23:06:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.207 23:06:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57974 00:04:57.207 killing process with pid 57974 00:04:57.207 23:06:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.207 23:06:29 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.207 23:06:29 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57974' 00:04:57.207 23:06:29 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57974 00:04:57.207 23:06:29 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57974 00:04:58.584 ************************************ 00:04:58.584 END TEST spdkcli_tcp 00:04:58.584 ************************************ 00:04:58.584 00:04:58.584 real 0m2.462s 00:04:58.584 user 0m4.405s 00:04:58.584 sys 0m0.416s 00:04:58.584 23:06:30 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.584 23:06:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.584 23:06:30 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:58.585 23:06:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.585 23:06:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.585 23:06:30 -- common/autotest_common.sh@10 -- # set +x 00:04:58.585 ************************************ 00:04:58.585 START TEST dpdk_mem_utility 00:04:58.585 ************************************ 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:58.585 * Looking for test storage... 
00:04:58.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.585 23:06:30 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.585 --rc genhtml_branch_coverage=1 00:04:58.585 --rc genhtml_function_coverage=1 00:04:58.585 --rc genhtml_legend=1 00:04:58.585 --rc geninfo_all_blocks=1 00:04:58.585 --rc geninfo_unexecuted_blocks=1 00:04:58.585 00:04:58.585 ' 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.585 --rc 
genhtml_branch_coverage=1 00:04:58.585 --rc genhtml_function_coverage=1 00:04:58.585 --rc genhtml_legend=1 00:04:58.585 --rc geninfo_all_blocks=1 00:04:58.585 --rc geninfo_unexecuted_blocks=1 00:04:58.585 00:04:58.585 ' 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.585 --rc genhtml_branch_coverage=1 00:04:58.585 --rc genhtml_function_coverage=1 00:04:58.585 --rc genhtml_legend=1 00:04:58.585 --rc geninfo_all_blocks=1 00:04:58.585 --rc geninfo_unexecuted_blocks=1 00:04:58.585 00:04:58.585 ' 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.585 --rc genhtml_branch_coverage=1 00:04:58.585 --rc genhtml_function_coverage=1 00:04:58.585 --rc genhtml_legend=1 00:04:58.585 --rc geninfo_all_blocks=1 00:04:58.585 --rc geninfo_unexecuted_blocks=1 00:04:58.585 00:04:58.585 ' 00:04:58.585 23:06:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:58.585 23:06:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58079 00:04:58.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.585 23:06:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58079 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58079 ']' 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.585 23:06:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.585 23:06:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.585 [2024-11-25 23:06:30.890762] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
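The dpdk_mem_utility test that follows works in two steps: it asks the running target for env_dpdk_get_mem_stats over RPC (the target replies with the dump file name /tmp/spdk_mem_dump.txt) and then has dpdk_mem_info.py summarize that dump, once as heap/mempool/memzone totals and once per element with -m 0. The same sequence by hand, using only the commands visible in the trace below:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                # heap/mempool/memzone summary
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0           # element-level dump of heap 0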
00:04:58.585 [2024-11-25 23:06:30.891029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58079 ] 00:04:58.850 [2024-11-25 23:06:31.051947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.851 [2024-11-25 23:06:31.157289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.423 23:06:31 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.423 23:06:31 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:59.423 23:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:59.423 23:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:59.423 23:06:31 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.423 23:06:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:59.423 { 00:04:59.423 "filename": "/tmp/spdk_mem_dump.txt" 00:04:59.423 } 00:04:59.423 23:06:31 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.423 23:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:59.714 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:59.714 1 heaps totaling size 816.000000 MiB 00:04:59.714 size: 816.000000 MiB heap id: 0 00:04:59.714 end heaps---------- 00:04:59.714 9 mempools totaling size 595.772034 MiB 00:04:59.714 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:59.714 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:59.714 size: 92.545471 MiB name: bdev_io_58079 00:04:59.714 size: 50.003479 MiB name: msgpool_58079 00:04:59.714 size: 36.509338 MiB name: fsdev_io_58079 00:04:59.714 size: 21.763794 MiB name: PDU_Pool 00:04:59.714 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:59.714 size: 4.133484 MiB name: evtpool_58079 00:04:59.714 size: 0.026123 MiB name: Session_Pool 00:04:59.714 end mempools------- 00:04:59.714 6 memzones totaling size 4.142822 MiB 00:04:59.714 size: 1.000366 MiB name: RG_ring_0_58079 00:04:59.714 size: 1.000366 MiB name: RG_ring_1_58079 00:04:59.714 size: 1.000366 MiB name: RG_ring_4_58079 00:04:59.714 size: 1.000366 MiB name: RG_ring_5_58079 00:04:59.714 size: 0.125366 MiB name: RG_ring_2_58079 00:04:59.714 size: 0.015991 MiB name: RG_ring_3_58079 00:04:59.714 end memzones------- 00:04:59.714 23:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:59.714 heap id: 0 total size: 816.000000 MiB number of busy elements: 323 number of free elements: 18 00:04:59.714 list of free elements. 
size: 16.789429 MiB
00:04:59.714 element at address: 0x200006400000 with size: 1.995972 MiB
00:04:59.714 element at address: 0x20000a600000 with size: 1.995972 MiB
00:04:59.714 element at address: 0x200003e00000 with size: 1.991028 MiB
00:04:59.714 element at address: 0x200018d00040 with size: 0.999939 MiB
00:04:59.714 element at address: 0x200019100040 with size: 0.999939 MiB
00:04:59.714 element at address: 0x200019200000 with size: 0.999084 MiB
00:04:59.714 element at address: 0x200031e00000 with size: 0.994324 MiB
00:04:59.714 element at address: 0x200000400000 with size: 0.992004 MiB
00:04:59.714 element at address: 0x200018a00000 with size: 0.959656 MiB
00:04:59.714 element at address: 0x200019500040 with size: 0.936401 MiB
00:04:59.714 element at address: 0x200000200000 with size: 0.716980 MiB
00:04:59.714 element at address: 0x20001ac00000 with size: 0.558777 MiB
00:04:59.714 element at address: 0x200000c00000 with size: 0.490173 MiB
00:04:59.714 element at address: 0x200018e00000 with size: 0.487976 MiB
00:04:59.714 element at address: 0x200019600000 with size: 0.485413 MiB
00:04:59.714 element at address: 0x200012c00000 with size: 0.443237 MiB
00:04:59.714 element at address: 0x200028000000 with size: 0.391663 MiB
00:04:59.714 element at address: 0x200000800000 with size: 0.350891 MiB
00:04:59.714 list of standard malloc elements. size: 199.289673 MiB
00:04:59.714 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:04:59.715 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:04:59.715 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:04:59.715 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:04:59.715 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:04:59.715 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:04:59.715 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:04:59.715 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:04:59.715 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:04:59.715 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:04:59.715 element at address: 0x200012bff040 with size: 0.000305 MiB
00:04:59.715 [several hundred further 0.000244 MiB heap elements were enumerated here, one per address; the individual per-address entries are elided]
00:04:59.717 list of memzone associated elements. size: 599.920898 MiB
00:04:59.717 element at address: 0x20001ac954c0 with size: 211.416809 MiB; associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:59.717 element at address: 0x20002806ff80 with size: 157.562622 MiB; associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:59.717 element at address: 0x200012df4740 with size: 92.045105 MiB; associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58079_0
00:04:59.717 element at address: 0x200000dff340 with size: 48.003113 MiB; associated memzone info: size: 48.002930 MiB name: MP_msgpool_58079_0
00:04:59.717 element at address: 0x200003ffdb40 with size: 36.008972 MiB; associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58079_0
00:04:59.717 element at address: 0x2000197be900 with size: 20.255615 MiB; associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:59.717 element at address: 0x200031ffeb00 with size: 18.005127 MiB; associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:59.717 element at address: 0x2000004ffec0 with size: 3.000305 MiB; associated memzone info: size: 3.000122 MiB name: MP_evtpool_58079_0
00:04:59.717 element at address: 0x2000009ffdc0 with size: 2.000549 MiB; associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58079
00:04:59.717 element at address: 0x2000002d7c00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_evtpool_58079
00:04:59.717 element at address: 0x200018efde00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:59.717 element at address: 0x2000196bc780 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:59.717 element at address: 0x200018afde00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:59.717 element at address: 0x200012cf25c0 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:59.717 element at address: 0x200000cff100 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_0_58079
00:04:59.717 element at address: 0x2000008ffb80 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_1_58079
00:04:59.717 element at address: 0x2000192ffd40 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_4_58079
00:04:59.717 element at address: 0x200031efe8c0 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_5_58079
00:04:59.717 element at address: 0x20000087f5c0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58079
00:04:59.717 element at address: 0x200000c7ecc0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58079
00:04:59.717 element at address: 0x200018e7dac0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:59.717 element at address: 0x200012c72280 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:59.717 element at address: 0x20001967c440 with size: 0.250549 MiB; associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:59.717 element at address: 0x2000002b78c0 with size: 0.125549 MiB; associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58079
00:04:59.717 element at address: 0x20000085df80 with size: 0.125549 MiB; associated memzone info: size: 0.125366 MiB name: RG_ring_2_58079
00:04:59.717 element at address: 0x200018af5ac0 with size: 0.031799 MiB; associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:59.717 element at address: 0x200028064640 with size: 0.023804 MiB; associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:59.717 element at address: 0x200000859d40 with size: 0.016174 MiB; associated memzone info: size: 0.015991 MiB name: RG_ring_3_58079
00:04:59.717 element at address: 0x20002806a7c0 with size: 0.002502 MiB; associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:59.717 element at address: 0x2000004ffa40 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_msgpool_58079
00:04:59.717 element at address: 0x2000008ff900 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58079
00:04:59.717 element at address: 0x200012bffd80 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58079
00:04:59.717 element at address: 0x20002806b300 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:59.717 23:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:59.717 23:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58079
00:04:59.717 23:06:31 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58079 ']'
00:04:59.717 23:06:31 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58079
00:04:59.717 23:06:31 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:04:59.717 23:06:31 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:59.717 23:06:31 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58079
00:04:59.717 23:06:31 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:59.717 23:06:31 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:59.717 23:06:31 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58079'
00:04:59.717 killing process with pid 58079
00:04:59.717 23:06:31 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58079
00:04:59.717 23:06:31 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58079
00:05:01.091
00:05:01.091 real 0m2.571s
00:05:01.091 user 0m2.558s
00:05:01.091 sys 0m0.407s
00:05:01.091 23:06:33 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:01.091 23:06:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:01.091 ************************************
00:05:01.091 END TEST dpdk_mem_utility
00:05:01.091 ************************************
00:05:01.091 23:06:33 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:01.091 23:06:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:01.091 23:06:33 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:01.091 23:06:33 -- common/autotest_common.sh@10 -- # set +x
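The memzone listing above ties each named SPDK allocation (the msgpool_58079/bdev_io_58079/fsdev_io_58079 pools, the PDU pools, the per-core RG_ring entries, and Session_Pool) back to the DPDK heap element that backs it, which is the data test_dpdk_mem_info.sh pulls from the running app before tearing it down. A quick offline cross-check of such a dump, as a minimal sketch (mem_info.log is a hypothetical capture of the output above, not a file the test writes):

    # Minimal sketch, not part of the test suite: count and total the heap
    # elements in a saved dpdk_mem_utility dump, assuming one
    # "element at address ... with size: X MiB" entry per line.
    grep -o 'with size: [0-9.]* MiB' mem_info.log |
        awk '{ total += $3; n++ } END { printf "%d elements, %.3f MiB total\n", n, total }'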
00:05:01.091 ************************************ 00:05:01.091 START TEST event 00:05:01.091 ************************************ 00:05:01.091 23:06:33 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:01.091 * Looking for test storage... 00:05:01.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:01.091 23:06:33 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.091 23:06:33 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.091 23:06:33 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.091 23:06:33 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.091 23:06:33 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.091 23:06:33 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.091 23:06:33 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.091 23:06:33 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.091 23:06:33 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.091 23:06:33 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.091 23:06:33 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.091 23:06:33 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.091 23:06:33 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.091 23:06:33 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.091 23:06:33 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.091 23:06:33 event -- scripts/common.sh@344 -- # case "$op" in 00:05:01.091 23:06:33 event -- scripts/common.sh@345 -- # : 1 00:05:01.091 23:06:33 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.091 23:06:33 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.091 23:06:33 event -- scripts/common.sh@365 -- # decimal 1 00:05:01.091 23:06:33 event -- scripts/common.sh@353 -- # local d=1 00:05:01.091 23:06:33 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.091 23:06:33 event -- scripts/common.sh@355 -- # echo 1 00:05:01.091 23:06:33 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.091 23:06:33 event -- scripts/common.sh@366 -- # decimal 2 00:05:01.091 23:06:33 event -- scripts/common.sh@353 -- # local d=2 00:05:01.091 23:06:33 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.091 23:06:33 event -- scripts/common.sh@355 -- # echo 2 00:05:01.091 23:06:33 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.091 23:06:33 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.091 23:06:33 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.091 23:06:33 event -- scripts/common.sh@368 -- # return 0 00:05:01.091 23:06:33 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.091 23:06:33 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.091 --rc genhtml_branch_coverage=1 00:05:01.091 --rc genhtml_function_coverage=1 00:05:01.091 --rc genhtml_legend=1 00:05:01.091 --rc geninfo_all_blocks=1 00:05:01.091 --rc geninfo_unexecuted_blocks=1 00:05:01.091 00:05:01.091 ' 00:05:01.092 23:06:33 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.092 --rc genhtml_branch_coverage=1 00:05:01.092 --rc genhtml_function_coverage=1 00:05:01.092 --rc genhtml_legend=1 00:05:01.092 --rc 
geninfo_all_blocks=1 00:05:01.092 --rc geninfo_unexecuted_blocks=1 00:05:01.092 00:05:01.092 ' 00:05:01.092 23:06:33 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.092 --rc genhtml_branch_coverage=1 00:05:01.092 --rc genhtml_function_coverage=1 00:05:01.092 --rc genhtml_legend=1 00:05:01.092 --rc geninfo_all_blocks=1 00:05:01.092 --rc geninfo_unexecuted_blocks=1 00:05:01.092 00:05:01.092 ' 00:05:01.092 23:06:33 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.092 --rc genhtml_branch_coverage=1 00:05:01.092 --rc genhtml_function_coverage=1 00:05:01.092 --rc genhtml_legend=1 00:05:01.092 --rc geninfo_all_blocks=1 00:05:01.092 --rc geninfo_unexecuted_blocks=1 00:05:01.092 00:05:01.092 ' 00:05:01.092 23:06:33 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:01.092 23:06:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:01.092 23:06:33 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:01.092 23:06:33 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:01.092 23:06:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.092 23:06:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.092 ************************************ 00:05:01.092 START TEST event_perf 00:05:01.092 ************************************ 00:05:01.092 23:06:33 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:01.349 Running I/O for 1 seconds...[2024-11-25 23:06:33.463270] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:05:01.349 [2024-11-25 23:06:33.463741] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58171 ] 00:05:01.349 [2024-11-25 23:06:33.626496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:01.607 [2024-11-25 23:06:33.730882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.607 [2024-11-25 23:06:33.731256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.607 [2024-11-25 23:06:33.731894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.607 Running I/O for 1 seconds...[2024-11-25 23:06:33.731915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.541 00:05:02.541 lcore 0: 201136 00:05:02.541 lcore 1: 201138 00:05:02.541 lcore 2: 201134 00:05:02.541 lcore 3: 201134 00:05:02.541 done. 
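The four counters above are event_perf's per-lcore tally of events processed during the one-second run on core mask 0xF: roughly 201k events on each of the four reactors. Summing them from a captured log, as a minimal sketch (event_perf.log is a hypothetical capture of the run output, not something the test produces):

    # Sketch: add up the "lcore N: count" lines to get the total events
    # processed across all reactors in the one-second run.
    awk '/lcore [0-9]+:/ { total += $NF } END { print "total events:", total }' event_perf.log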
00:05:02.541 00:05:02.541 real 0m1.467s 00:05:02.541 user 0m4.253s 00:05:02.541 sys 0m0.091s 00:05:02.541 ************************************ 00:05:02.541 END TEST event_perf 00:05:02.541 ************************************ 00:05:02.541 23:06:34 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.541 23:06:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.799 23:06:34 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:02.799 23:06:34 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:02.799 23:06:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.799 23:06:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.799 ************************************ 00:05:02.799 START TEST event_reactor 00:05:02.799 ************************************ 00:05:02.799 23:06:34 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:02.799 [2024-11-25 23:06:34.972440] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:05:02.799 [2024-11-25 23:06:34.972544] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58210 ] 00:05:02.799 [2024-11-25 23:06:35.132178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.056 [2024-11-25 23:06:35.227147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.429 test_start 00:05:04.429 oneshot 00:05:04.429 tick 100 00:05:04.429 tick 100 00:05:04.429 tick 250 00:05:04.429 tick 100 00:05:04.429 tick 100 00:05:04.429 tick 100 00:05:04.429 tick 250 00:05:04.429 tick 500 00:05:04.429 tick 100 00:05:04.429 tick 100 00:05:04.429 tick 250 00:05:04.429 tick 100 00:05:04.429 tick 100 00:05:04.429 test_end 00:05:04.429 00:05:04.429 real 0m1.442s 00:05:04.429 user 0m1.261s 00:05:04.429 sys 0m0.072s 00:05:04.429 23:06:36 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.429 ************************************ 00:05:04.429 END TEST event_reactor 00:05:04.429 ************************************ 00:05:04.429 23:06:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:04.429 23:06:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.429 23:06:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:04.429 23:06:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.429 23:06:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.429 ************************************ 00:05:04.429 START TEST event_reactor_perf 00:05:04.429 ************************************ 00:05:04.429 23:06:36 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.429 [2024-11-25 23:06:36.476919] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:05:04.429 [2024-11-25 23:06:36.477106] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58247 ] 00:05:04.429 [2024-11-25 23:06:36.651658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.429 [2024-11-25 23:06:36.742181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.806 test_start 00:05:05.806 test_end 00:05:05.806 Performance: 412664 events per second 00:05:05.806 00:05:05.806 real 0m1.421s 00:05:05.806 user 0m1.222s 00:05:05.806 sys 0m0.091s 00:05:05.806 23:06:37 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.806 ************************************ 00:05:05.806 END TEST event_reactor_perf 00:05:05.806 ************************************ 00:05:05.806 23:06:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.806 23:06:37 event -- event/event.sh@49 -- # uname -s 00:05:05.806 23:06:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:05.806 23:06:37 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:05.806 23:06:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.806 23:06:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.806 23:06:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.806 ************************************ 00:05:05.806 START TEST event_scheduler 00:05:05.806 ************************************ 00:05:05.806 23:06:37 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:05.806 * Looking for test storage... 
00:05:05.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:05.806 23:06:37 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.806 23:06:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.806 23:06:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.806 23:06:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:05.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
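The "Waiting for process to start up..." message comes from waitforlisten, which blocks until the scheduler app's RPC Unix socket exists before any rpc_cmd is issued. The same polling idea as a minimal sketch (a simplification under assumptions, not the verbatim autotest_common.sh implementation):

    # Sketch of a waitforlisten-style helper: poll for the app's RPC
    # socket, giving up after ~10 seconds.
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        [ -S "$sock" ] && break    # -S: true once the socket file exists
        sleep 0.1
    done
    [ -S "$sock" ] || { echo "timed out waiting for $sock" >&2; exit 1; }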
00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.806 23:06:38 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:05.806 23:06:38 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.806 23:06:38 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.806 --rc genhtml_branch_coverage=1 00:05:05.806 --rc genhtml_function_coverage=1 00:05:05.806 --rc genhtml_legend=1 00:05:05.806 --rc geninfo_all_blocks=1 00:05:05.806 --rc geninfo_unexecuted_blocks=1 00:05:05.806 00:05:05.806 ' 00:05:05.806 23:06:38 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.806 --rc genhtml_branch_coverage=1 00:05:05.806 --rc genhtml_function_coverage=1 00:05:05.806 --rc genhtml_legend=1 00:05:05.806 --rc geninfo_all_blocks=1 00:05:05.806 --rc geninfo_unexecuted_blocks=1 00:05:05.806 00:05:05.806 ' 00:05:05.806 23:06:38 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.806 --rc genhtml_branch_coverage=1 00:05:05.806 --rc genhtml_function_coverage=1 00:05:05.806 --rc genhtml_legend=1 00:05:05.806 --rc geninfo_all_blocks=1 00:05:05.806 --rc geninfo_unexecuted_blocks=1 00:05:05.806 00:05:05.806 ' 00:05:05.806 23:06:38 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.806 --rc genhtml_branch_coverage=1 00:05:05.806 --rc genhtml_function_coverage=1 00:05:05.806 --rc genhtml_legend=1 00:05:05.806 --rc geninfo_all_blocks=1 00:05:05.806 --rc geninfo_unexecuted_blocks=1 00:05:05.806 00:05:05.806 ' 00:05:05.806 23:06:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:05.806 23:06:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58317 00:05:05.806 23:06:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.806 23:06:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:05.806 23:06:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58317 00:05:05.806 23:06:38 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58317 ']' 00:05:05.806 23:06:38 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.806 23:06:38 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.806 23:06:38 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.806 23:06:38 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.806 23:06:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.806 [2024-11-25 23:06:38.126379] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:05:05.806 [2024-11-25 23:06:38.126686] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58317 ] 00:05:06.066 [2024-11-25 23:06:38.289770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.066 [2024-11-25 23:06:38.414846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.066 [2024-11-25 23:06:38.415450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.066 [2024-11-25 23:06:38.416833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.066 [2024-11-25 23:06:38.416952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.664 23:06:38 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.664 23:06:38 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:06.664 23:06:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:06.664 23:06:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.664 23:06:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.664 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.664 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.664 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.664 POWER: Cannot set governor of lcore 0 to performance 00:05:06.664 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.664 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.664 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.664 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.664 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:06.664 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:06.664 POWER: Unable to set Power Management Environment for lcore 0 00:05:06.664 [2024-11-25 23:06:38.986686] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:06.664 [2024-11-25 23:06:38.986720] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:06.664 [2024-11-25 23:06:38.986802] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:06.664 [2024-11-25 23:06:38.986834] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:06.664 [2024-11-25 23:06:38.986854] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:06.664 [2024-11-25 23:06:38.986920] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:06.664 23:06:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.664 23:06:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:06.664 23:06:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.664 23:06:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.926 [2024-11-25 23:06:39.223770] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:06.926 23:06:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.926 23:06:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:06.926 23:06:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.926 23:06:39 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.926 23:06:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.926 ************************************ 00:05:06.926 START TEST scheduler_create_thread 00:05:06.926 ************************************ 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.926 2 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.926 3 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.926 4 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.926 5 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.926 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 6 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 7 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 8 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 9 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 10 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 23:06:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.571 23:06:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.571 23:06:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:08.571 23:06:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:08.571 23:06:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.571 23:06:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.515 ************************************ 00:05:09.515 END TEST scheduler_create_thread 00:05:09.515 ************************************ 00:05:09.515 23:06:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.515 00:05:09.515 real 0m2.617s 00:05:09.515 user 0m0.020s 00:05:09.515 sys 0m0.002s 00:05:09.515 23:06:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.515 23:06:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.774 23:06:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:09.774 23:06:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58317 00:05:09.774 23:06:41 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58317 ']' 00:05:09.774 23:06:41 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58317 00:05:09.774 23:06:41 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:09.774 23:06:41 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.774 23:06:41 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58317 00:05:09.774 killing process with pid 58317 00:05:09.774 23:06:41 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:09.774 23:06:41 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:09.774 23:06:41 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58317' 00:05:09.774 23:06:41 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58317 00:05:09.774 23:06:41 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58317 00:05:10.033 [2024-11-25 23:06:42.342845] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
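scheduler_create_thread drives the app entirely over its RPC socket: the scheduler_thread_create / scheduler_thread_set_active / scheduler_thread_delete methods traced above come from the scheduler_plugin extension to rpc.py rather than the core RPC set. The equivalent direct invocation, as a sketch (assumes scheduler_plugin is importable, which the test arranges via its environment):

    # Sketch of the rpc_cmd calls traced above, invoked directly.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # -n thread name, -m cpumask, -a "active" percentage; the call prints
    # the new thread id, which set_active/delete calls then take.
    $rpc -s /var/tmp/spdk.sock --plugin scheduler_plugin \
        scheduler_thread_create -n active_pinned -m 0x1 -a 100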
00:05:10.609 00:05:10.609 real 0m5.031s 00:05:10.609 user 0m8.774s 00:05:10.609 sys 0m0.370s 00:05:10.609 23:06:42 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.609 ************************************ 00:05:10.610 END TEST event_scheduler 00:05:10.610 ************************************ 00:05:10.610 23:06:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.870 23:06:42 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:10.870 23:06:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:10.870 23:06:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.870 23:06:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.870 23:06:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.870 ************************************ 00:05:10.870 START TEST app_repeat 00:05:10.870 ************************************ 00:05:10.870 23:06:42 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:10.870 23:06:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.870 23:06:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.870 23:06:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:10.870 23:06:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.870 23:06:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:10.870 23:06:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:10.870 23:06:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:10.870 Process app_repeat pid: 58423 00:05:10.870 spdk_app_start Round 0 00:05:10.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.870 23:06:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58423 00:05:10.870 23:06:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.870 23:06:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58423' 00:05:10.870 23:06:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.870 23:06:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:10.870 23:06:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58423 /var/tmp/spdk-nbd.sock 00:05:10.870 23:06:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58423 ']' 00:05:10.870 23:06:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.870 23:06:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.870 23:06:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.870 23:06:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.870 23:06:42 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:10.870 23:06:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.870 [2024-11-25 23:06:43.038143] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:05:10.870 [2024-11-25 23:06:43.038251] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58423 ] 00:05:10.870 [2024-11-25 23:06:43.198013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.131 [2024-11-25 23:06:43.300296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.131 [2024-11-25 23:06:43.300394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.711 23:06:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.711 23:06:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:11.711 23:06:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.975 Malloc0 00:05:11.975 23:06:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.236 Malloc1 00:05:12.236 23:06:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.236 23:06:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.236 23:06:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.236 23:06:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.236 23:06:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.236 23:06:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.236 23:06:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.236 23:06:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.236 23:06:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.236 23:06:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.236 23:06:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.236 23:06:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.236 23:06:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.236 23:06:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.236 23:06:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.236 23:06:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.498 /dev/nbd0 00:05:12.498 23:06:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.498 23:06:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.498 23:06:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:12.498 23:06:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.498 23:06:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.498 23:06:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.498 23:06:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:12.498 23:06:44 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:12.498 23:06:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.498 23:06:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.498 23:06:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.498 1+0 records in 00:05:12.498 1+0 records out 00:05:12.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267376 s, 15.3 MB/s 00:05:12.498 23:06:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.498 23:06:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.498 23:06:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.498 23:06:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.498 23:06:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.498 23:06:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.498 23:06:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.498 23:06:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.498 /dev/nbd1 00:05:12.498 23:06:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.757 23:06:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.757 23:06:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:12.757 23:06:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.757 23:06:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.757 23:06:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.757 23:06:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:12.757 23:06:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.757 23:06:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.757 23:06:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.757 23:06:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.757 1+0 records in 00:05:12.757 1+0 records out 00:05:12.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000156443 s, 26.2 MB/s 00:05:12.757 23:06:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.757 23:06:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.757 23:06:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.757 23:06:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.757 23:06:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.757 23:06:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.757 23:06:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.757 23:06:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.757 23:06:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
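The waitfornbd checks above first poll the kernel, then prove the device actually answers reads; a hedged reconstruction (the retry delay is an assumption, the rest mirrors the xtrace):

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # an attached nbd device shows up in /proc/partitions
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1  # assumed backoff; the xtrace does not show it
    done
    for ((i = 1; i <= 20; i++)); do
        # read one 4 KiB block with O_DIRECT to confirm real I/O works
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest \
            bs=4096 count=1 iflag=direct && break
    done
    local size
    size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest)
    rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
    # a zero-byte read means the device never became ready
    [ "$size" != 0 ]
}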
00:05:12.757 23:06:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.757 { 00:05:12.757 "nbd_device": "/dev/nbd0", 00:05:12.757 "bdev_name": "Malloc0" 00:05:12.757 }, 00:05:12.757 { 00:05:12.757 "nbd_device": "/dev/nbd1", 00:05:12.757 "bdev_name": "Malloc1" 00:05:12.757 } 00:05:12.757 ]' 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.757 { 00:05:12.757 "nbd_device": "/dev/nbd0", 00:05:12.757 "bdev_name": "Malloc0" 00:05:12.757 }, 00:05:12.757 { 00:05:12.757 "nbd_device": "/dev/nbd1", 00:05:12.757 "bdev_name": "Malloc1" 00:05:12.757 } 00:05:12.757 ]' 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.757 /dev/nbd1' 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.757 /dev/nbd1' 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.757 23:06:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.016 256+0 records in 00:05:13.016 256+0 records out 00:05:13.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00884097 s, 119 MB/s 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.016 256+0 records in 00:05:13.016 256+0 records out 00:05:13.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196553 s, 53.3 MB/s 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.016 256+0 records in 00:05:13.016 256+0 records out 00:05:13.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172065 s, 60.9 MB/s 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.016 23:06:45 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.016 23:06:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.274 23:06:45 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.274 23:06:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.533 23:06:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.534 23:06:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.534 23:06:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.534 23:06:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.534 23:06:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.534 23:06:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.534 23:06:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.534 23:06:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.534 23:06:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.534 23:06:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.534 23:06:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.534 23:06:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.534 23:06:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.792 23:06:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:14.400 [2024-11-25 23:06:46.726024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.658 [2024-11-25 23:06:46.800577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.658 [2024-11-25 23:06:46.800615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.658 [2024-11-25 23:06:46.902917] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.658 [2024-11-25 23:06:46.902970] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.188 spdk_app_start Round 1 00:05:17.188 23:06:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.188 23:06:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:17.188 23:06:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58423 /var/tmp/spdk-nbd.sock 00:05:17.188 23:06:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58423 ']' 00:05:17.188 23:06:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.188 23:06:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.188 23:06:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
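nbd_get_count, traced above both with two devices and (after teardown) with none, boils down to JSON parsing; a sketch using the same rpc.py and jq calls:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)
# an empty array ('[]') yields an empty name list
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# grep -c exits non-zero on zero matches, so keep the pipeline alive
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count nbd device(s) attached"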
00:05:17.188 23:06:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.188 23:06:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.188 23:06:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.188 23:06:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:17.188 23:06:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.447 Malloc0 00:05:17.447 23:06:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.447 Malloc1 00:05:17.447 23:06:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.447 23:06:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.447 23:06:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.447 23:06:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.447 23:06:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.447 23:06:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.447 23:06:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.447 23:06:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.447 23:06:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.447 23:06:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.447 23:06:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.447 23:06:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.447 23:06:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.447 23:06:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.447 23:06:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.447 23:06:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.706 /dev/nbd0 00:05:17.706 23:06:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.706 23:06:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.706 23:06:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:17.706 23:06:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.706 23:06:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.706 23:06:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.706 23:06:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:17.706 23:06:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.706 23:06:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.706 23:06:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.706 23:06:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.706 1+0 records in 00:05:17.706 1+0 records out 
00:05:17.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477945 s, 8.6 MB/s 00:05:17.706 23:06:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.706 23:06:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.706 23:06:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.706 23:06:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.706 23:06:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.706 23:06:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.706 23:06:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.706 23:06:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.964 /dev/nbd1 00:05:17.964 23:06:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.964 23:06:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.964 23:06:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:17.964 23:06:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.964 23:06:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.964 23:06:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.964 23:06:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:17.964 23:06:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.964 23:06:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.964 23:06:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.964 23:06:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.964 1+0 records in 00:05:17.964 1+0 records out 00:05:17.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242058 s, 16.9 MB/s 00:05:17.964 23:06:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.964 23:06:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.964 23:06:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.965 23:06:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.965 23:06:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.965 23:06:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.965 23:06:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.965 23:06:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.965 23:06:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.965 23:06:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.223 { 00:05:18.223 "nbd_device": "/dev/nbd0", 00:05:18.223 "bdev_name": "Malloc0" 00:05:18.223 }, 00:05:18.223 { 00:05:18.223 "nbd_device": "/dev/nbd1", 00:05:18.223 "bdev_name": "Malloc1" 00:05:18.223 } 
00:05:18.223 ]' 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.223 { 00:05:18.223 "nbd_device": "/dev/nbd0", 00:05:18.223 "bdev_name": "Malloc0" 00:05:18.223 }, 00:05:18.223 { 00:05:18.223 "nbd_device": "/dev/nbd1", 00:05:18.223 "bdev_name": "Malloc1" 00:05:18.223 } 00:05:18.223 ]' 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.223 /dev/nbd1' 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.223 /dev/nbd1' 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.223 256+0 records in 00:05:18.223 256+0 records out 00:05:18.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120104 s, 87.3 MB/s 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.223 256+0 records in 00:05:18.223 256+0 records out 00:05:18.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191326 s, 54.8 MB/s 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.223 256+0 records in 00:05:18.223 256+0 records out 00:05:18.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015781 s, 66.4 MB/s 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.223 23:06:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.480 23:06:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.480 23:06:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.480 23:06:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.480 23:06:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.480 23:06:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.480 23:06:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.480 23:06:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.480 23:06:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.480 23:06:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.480 23:06:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.737 23:06:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.737 23:06:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.737 23:06:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.737 23:06:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.737 23:06:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.737 23:06:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.737 23:06:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.737 23:06:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.737 23:06:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.737 23:06:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.737 23:06:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.994 23:06:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.994 23:06:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.994 23:06:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:18.994 23:06:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.994 23:06:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.994 23:06:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.994 23:06:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.994 23:06:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.994 23:06:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.994 23:06:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.994 23:06:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.994 23:06:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.994 23:06:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.253 23:06:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.820 [2024-11-25 23:06:52.059026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.820 [2024-11-25 23:06:52.133374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.820 [2024-11-25 23:06:52.133379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.078 [2024-11-25 23:06:52.231839] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.078 [2024-11-25 23:06:52.231898] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.605 spdk_app_start Round 2 00:05:22.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.605 23:06:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.605 23:06:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:22.605 23:06:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58423 /var/tmp/spdk-nbd.sock 00:05:22.605 23:06:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58423 ']' 00:05:22.605 23:06:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.605 23:06:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.605 23:06:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
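Each round above is one pass of the same loop: wait for the restarted app, run the malloc/NBD verify, then ask the instance to terminate itself over RPC (a sketch per the event.sh xtrace):

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$rpc_server"
    # bdev_malloc_create + nbd_rpc_data_verify run here, as traced above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" \
        spdk_kill_instance SIGTERM
    # give app_repeat time to restart its reactors for the next round
    sleep 3
done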
00:05:22.605 23:06:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.605 23:06:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.605 23:06:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.605 23:06:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:22.605 23:06:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.605 Malloc0 00:05:22.605 23:06:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.863 Malloc1 00:05:22.863 23:06:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.863 23:06:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.863 23:06:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.863 23:06:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.863 23:06:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.863 23:06:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.863 23:06:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.863 23:06:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.863 23:06:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.863 23:06:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.863 23:06:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.863 23:06:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.863 23:06:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.863 23:06:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.863 23:06:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.863 23:06:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.121 /dev/nbd0 00:05:23.121 23:06:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.121 23:06:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.121 23:06:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:23.121 23:06:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.121 23:06:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.121 23:06:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.121 23:06:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:23.121 23:06:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.121 23:06:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.121 23:06:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.121 23:06:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.121 1+0 records in 00:05:23.121 1+0 records out 
00:05:23.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221299 s, 18.5 MB/s 00:05:23.121 23:06:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.121 23:06:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.121 23:06:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.121 23:06:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.121 23:06:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.121 23:06:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.121 23:06:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.121 23:06:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.380 /dev/nbd1 00:05:23.380 23:06:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.380 23:06:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.380 23:06:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:23.380 23:06:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.380 23:06:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.380 23:06:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.380 23:06:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:23.380 23:06:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.380 23:06:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.380 23:06:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.380 23:06:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.380 1+0 records in 00:05:23.380 1+0 records out 00:05:23.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022715 s, 18.0 MB/s 00:05:23.380 23:06:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.380 23:06:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.380 23:06:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.380 23:06:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.380 23:06:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.380 23:06:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.380 23:06:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.380 23:06:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.380 23:06:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.380 23:06:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.638 { 00:05:23.638 "nbd_device": "/dev/nbd0", 00:05:23.638 "bdev_name": "Malloc0" 00:05:23.638 }, 00:05:23.638 { 00:05:23.638 "nbd_device": "/dev/nbd1", 00:05:23.638 "bdev_name": "Malloc1" 00:05:23.638 } 
00:05:23.638 ]' 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.638 { 00:05:23.638 "nbd_device": "/dev/nbd0", 00:05:23.638 "bdev_name": "Malloc0" 00:05:23.638 }, 00:05:23.638 { 00:05:23.638 "nbd_device": "/dev/nbd1", 00:05:23.638 "bdev_name": "Malloc1" 00:05:23.638 } 00:05:23.638 ]' 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.638 /dev/nbd1' 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.638 /dev/nbd1' 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.638 256+0 records in 00:05:23.638 256+0 records out 00:05:23.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00722202 s, 145 MB/s 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.638 256+0 records in 00:05:23.638 256+0 records out 00:05:23.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179068 s, 58.6 MB/s 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.638 256+0 records in 00:05:23.638 256+0 records out 00:05:23.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158355 s, 66.2 MB/s 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.638 23:06:55 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.638 23:06:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.896 23:06:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.896 23:06:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.896 23:06:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.896 23:06:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.896 23:06:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.896 23:06:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.896 23:06:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.896 23:06:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.896 23:06:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.896 23:06:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.153 23:06:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.153 23:06:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.153 23:06:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.153 23:06:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.153 23:06:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.153 23:06:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.153 23:06:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.153 23:06:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.153 23:06:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.153 23:06:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.153 23:06:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.411 23:06:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.411 23:06:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.411 23:06:56 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:24.411 23:06:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.411 23:06:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.411 23:06:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.411 23:06:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.411 23:06:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.411 23:06:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.412 23:06:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.412 23:06:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.412 23:06:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.412 23:06:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.669 23:06:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.237 [2024-11-25 23:06:57.501459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.496 [2024-11-25 23:06:57.614666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.496 [2024-11-25 23:06:57.614814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.496 [2024-11-25 23:06:57.750339] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.496 [2024-11-25 23:06:57.750417] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.025 23:06:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58423 /var/tmp/spdk-nbd.sock 00:05:28.025 23:06:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58423 ']' 00:05:28.025 23:06:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.025 23:06:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.025 23:06:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
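The write/verify cycle repeated in every round (nbd_dd_data_verify) condenses to a dd-out, dd-in, cmp-back sequence; a sketch with the same sizes and flags as the traces above:

tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)
# write phase: 1 MiB of random data, pushed through with O_DIRECT
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done
# verify phase: byte-wise compare the first 1 MiB of every device
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"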
00:05:28.026 23:06:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.026 23:06:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.026 23:07:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.026 23:07:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:28.026 23:07:00 event.app_repeat -- event/event.sh@39 -- # killprocess 58423 00:05:28.026 23:07:00 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58423 ']' 00:05:28.026 23:07:00 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58423 00:05:28.026 23:07:00 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:28.026 23:07:00 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.026 23:07:00 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58423 00:05:28.026 killing process with pid 58423 00:05:28.026 23:07:00 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.026 23:07:00 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.026 23:07:00 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58423' 00:05:28.026 23:07:00 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58423 00:05:28.026 23:07:00 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58423 00:05:28.594 spdk_app_start is called in Round 0. 00:05:28.594 Shutdown signal received, stop current app iteration 00:05:28.594 Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 reinitialization... 00:05:28.594 spdk_app_start is called in Round 1. 00:05:28.594 Shutdown signal received, stop current app iteration 00:05:28.594 Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 reinitialization... 00:05:28.594 spdk_app_start is called in Round 2. 00:05:28.594 Shutdown signal received, stop current app iteration 00:05:28.594 Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 reinitialization... 00:05:28.594 spdk_app_start is called in Round 3. 00:05:28.594 Shutdown signal received, stop current app iteration 00:05:28.594 23:07:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:28.594 23:07:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:28.594 00:05:28.594 real 0m17.816s 00:05:28.594 user 0m38.890s 00:05:28.594 sys 0m2.088s 00:05:28.594 23:07:00 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.594 ************************************ 00:05:28.594 END TEST app_repeat 00:05:28.594 ************************************ 00:05:28.594 23:07:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.594 23:07:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:28.594 23:07:00 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:28.594 23:07:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.594 23:07:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.594 23:07:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.594 ************************************ 00:05:28.594 START TEST cpu_locks 00:05:28.594 ************************************ 00:05:28.594 23:07:00 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:28.594 * Looking for test storage... 
00:05:28.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:28.594 23:07:00 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.594 23:07:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.594 23:07:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.853 23:07:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:28.853 23:07:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:28.854 23:07:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.854 23:07:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:28.854 23:07:00 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.854 23:07:00 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:28.854 23:07:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:28.854 23:07:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.854 23:07:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:28.854 23:07:00 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.854 23:07:00 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.854 23:07:00 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.854 23:07:00 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:28.854 23:07:00 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.854 23:07:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.854 --rc genhtml_branch_coverage=1 00:05:28.854 --rc genhtml_function_coverage=1 00:05:28.854 --rc genhtml_legend=1 00:05:28.854 --rc geninfo_all_blocks=1 00:05:28.854 --rc geninfo_unexecuted_blocks=1 00:05:28.854 00:05:28.854 ' 00:05:28.854 23:07:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.854 --rc genhtml_branch_coverage=1 00:05:28.854 --rc genhtml_function_coverage=1 
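The lt 1.15 2 check above (used to pick lcov options) is a component-wise version compare; a hedged reconstruction of scripts/common.sh's cmp_versions, with the decimal-normalization step simplified to a :-0 default:

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        # a missing component compares as 0
        if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
            [[ $op == '>' || $op == '>=' ]]; return
        elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then
            [[ $op == '<' || $op == '<=' ]]; return
        fi
    done
    [[ $op == '=' || $op == '>=' || $op == '<=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }  # lt 1.15 2 -> true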
00:05:28.854 --rc genhtml_legend=1 00:05:28.854 --rc geninfo_all_blocks=1 00:05:28.854 --rc geninfo_unexecuted_blocks=1 00:05:28.854 00:05:28.854 ' 00:05:28.854 23:07:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.854 --rc genhtml_branch_coverage=1 00:05:28.854 --rc genhtml_function_coverage=1 00:05:28.854 --rc genhtml_legend=1 00:05:28.854 --rc geninfo_all_blocks=1 00:05:28.854 --rc geninfo_unexecuted_blocks=1 00:05:28.854 00:05:28.854 ' 00:05:28.854 23:07:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.854 --rc genhtml_branch_coverage=1 00:05:28.854 --rc genhtml_function_coverage=1 00:05:28.854 --rc genhtml_legend=1 00:05:28.854 --rc geninfo_all_blocks=1 00:05:28.854 --rc geninfo_unexecuted_blocks=1 00:05:28.854 00:05:28.854 ' 00:05:28.854 23:07:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:28.854 23:07:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:28.854 23:07:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:28.854 23:07:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:28.854 23:07:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.854 23:07:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.854 23:07:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.854 ************************************ 00:05:28.854 START TEST default_locks 00:05:28.854 ************************************ 00:05:28.854 23:07:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:28.854 23:07:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58854 00:05:28.854 23:07:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58854 00:05:28.854 23:07:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58854 ']' 00:05:28.854 23:07:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.854 23:07:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.854 23:07:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.854 23:07:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.854 23:07:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.854 23:07:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.854 [2024-11-25 23:07:01.076698] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
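The lcov version gate traced above is a plain field-by-field numeric compare: split both version strings on dots, treat missing fields as zero, and decide at the first differing field. A minimal bash sketch of the same logic (illustrative helper, not a verbatim copy of scripts/common.sh):

lt() {
  # "is $1 < $2" for dotted versions, mirroring cmp_versions "$1" '<' "$2"
  local -a v1 v2
  local i max
  IFS=.- read -ra v1 <<< "$1"
  IFS=.- read -ra v2 <<< "$2"
  max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < max; i++ )); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # missing fields compare as 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1  # equal is not "less than"
}
lt 1.15 2 && echo "old lcov detected: pass the --rc coverage options above"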
00:05:28.854 [2024-11-25 23:07:01.076808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58854 ] 00:05:29.114 [2024-11-25 23:07:01.236766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.114 [2024-11-25 23:07:01.336194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.681 23:07:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.681 23:07:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:29.681 23:07:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58854 00:05:29.681 23:07:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58854 00:05:29.681 23:07:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.939 23:07:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58854 00:05:29.939 23:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58854 ']' 00:05:29.939 23:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58854 00:05:29.939 23:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:29.939 23:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.939 23:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58854 00:05:29.939 23:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.939 23:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.939 killing process with pid 58854 00:05:29.939 23:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58854' 00:05:29.939 23:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58854 00:05:29.939 23:07:02 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58854 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58854 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58854 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58854 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58854 ']' 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.850 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.850 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58854) - No such process 00:05:31.850 ERROR: process (pid: 58854) is no longer running 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:31.850 00:05:31.850 real 0m2.693s 00:05:31.850 user 0m2.683s 00:05:31.850 sys 0m0.457s 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.850 23:07:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.850 ************************************ 00:05:31.850 END TEST default_locks 00:05:31.850 ************************************ 00:05:31.850 23:07:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:31.850 23:07:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.850 23:07:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.850 23:07:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.850 ************************************ 00:05:31.850 START TEST default_locks_via_rpc 00:05:31.850 ************************************ 00:05:31.850 23:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:31.850 23:07:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58918 00:05:31.850 23:07:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58918 00:05:31.850 23:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58918 ']' 00:05:31.850 23:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.850 23:07:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.850 23:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
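The locks_exist probe that default_locks ran above reduces to asking util-linux lslocks whether the target pid holds a file lock named after a claimed core; spdk_tgt takes one such lock per core, /var/tmp/spdk_cpu_lock_NNN. A simplified sketch of that reduction, following the shape of the traced event/cpu_locks.sh helper:

locks_exist() {
  local pid=$1
  # started with -m 0x1, the target flocks /var/tmp/spdk_cpu_lock_000
  lslocks -p "$pid" | grep -q spdk_cpu_lock
}
locks_exist 58854 && echo "pid 58854 still holds its core lock"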
00:05:31.850 23:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.850 23:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.850 23:07:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.850 [2024-11-25 23:07:03.807084] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:05:31.850 [2024-11-25 23:07:03.807202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58918 ] 00:05:31.850 [2024-11-25 23:07:03.963666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.850 [2024-11-25 23:07:04.061494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58918 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58918 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58918 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58918 ']' 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58918 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:32.422 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.680 23:07:04 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58918 00:05:32.680 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.680 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.680 killing process with pid 58918 00:05:32.680 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58918' 00:05:32.680 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58918 00:05:32.680 23:07:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58918 00:05:34.065 00:05:34.065 real 0m2.567s 00:05:34.065 user 0m2.561s 00:05:34.065 sys 0m0.406s 00:05:34.065 23:07:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.065 23:07:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.065 ************************************ 00:05:34.065 END TEST default_locks_via_rpc 00:05:34.065 ************************************ 00:05:34.065 23:07:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:34.065 23:07:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.065 23:07:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.065 23:07:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.065 ************************************ 00:05:34.065 START TEST non_locking_app_on_locked_coremask 00:05:34.065 ************************************ 00:05:34.065 23:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:34.065 23:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58970 00:05:34.065 23:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58970 /var/tmp/spdk.sock 00:05:34.065 23:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58970 ']' 00:05:34.065 23:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.065 23:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.065 23:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.065 23:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.065 23:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.065 23:07:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.326 [2024-11-25 23:07:06.438195] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:05:34.326 [2024-11-25 23:07:06.438315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58970 ] 00:05:34.326 [2024-11-25 23:07:06.590629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.587 [2024-11-25 23:07:06.713869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.158 23:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.158 23:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.158 23:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58986 00:05:35.158 23:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58986 /var/tmp/spdk2.sock 00:05:35.158 23:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:35.158 23:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58986 ']' 00:05:35.158 23:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.158 23:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.158 23:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.158 23:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.158 23:07:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.158 [2024-11-25 23:07:07.396441] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:05:35.158 [2024-11-25 23:07:07.396558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58986 ] 00:05:35.415 [2024-11-25 23:07:07.570138] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:35.415 [2024-11-25 23:07:07.570180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.415 [2024-11-25 23:07:07.768490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.789 23:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.789 23:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:36.789 23:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58970 00:05:36.789 23:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.789 23:07:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58970 00:05:36.789 23:07:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58970 00:05:36.789 23:07:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58970 ']' 00:05:36.789 23:07:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58970 00:05:36.789 23:07:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.789 23:07:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.789 23:07:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58970 00:05:37.045 23:07:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.045 23:07:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.045 killing process with pid 58970 00:05:37.045 23:07:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58970' 00:05:37.045 23:07:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58970 00:05:37.045 23:07:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58970 00:05:39.636 23:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58986 00:05:39.636 23:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58986 ']' 00:05:39.636 23:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58986 00:05:39.636 23:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:39.636 23:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.636 23:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58986 00:05:39.636 23:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.636 23:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.636 killing process with pid 58986 00:05:39.636 23:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58986' 00:05:39.636 23:07:11 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58986 00:05:39.636 23:07:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58986 00:05:40.584 00:05:40.584 real 0m6.340s 00:05:40.584 user 0m6.590s 00:05:40.584 sys 0m0.834s 00:05:40.584 23:07:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.584 ************************************ 00:05:40.584 END TEST non_locking_app_on_locked_coremask 00:05:40.584 ************************************ 00:05:40.584 23:07:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.584 23:07:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:40.584 23:07:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.584 23:07:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.584 23:07:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.584 ************************************ 00:05:40.584 START TEST locking_app_on_unlocked_coremask 00:05:40.584 ************************************ 00:05:40.584 23:07:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:40.584 23:07:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59077 00:05:40.584 23:07:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59077 /var/tmp/spdk.sock 00:05:40.584 23:07:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59077 ']' 00:05:40.584 23:07:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.584 23:07:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.584 23:07:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.584 23:07:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:40.584 23:07:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.584 23:07:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.584 [2024-11-25 23:07:12.849927] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:05:40.584 [2024-11-25 23:07:12.850038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59077 ] 00:05:40.844 [2024-11-25 23:07:13.006450] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
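The "CPU core locks deactivated." notice above is what --disable-cpumask-locks prints, and it is the whole point of locking_app_on_unlocked_coremask: the unlocked target starts first and never claims core 0, so a second, locking target on the same core succeeds. Condensed from this run's two launches (the second follows just below):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # pid 59077: shares core 0, takes no lock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # pid 59093: claims /var/tmp/spdk_cpu_lock_000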
00:05:40.844 [2024-11-25 23:07:13.006486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.844 [2024-11-25 23:07:13.106573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.414 23:07:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.414 23:07:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.414 23:07:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59093 00:05:41.414 23:07:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59093 /var/tmp/spdk2.sock 00:05:41.414 23:07:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59093 ']' 00:05:41.414 23:07:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.414 23:07:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:41.414 23:07:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.414 23:07:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.414 23:07:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.414 23:07:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.414 [2024-11-25 23:07:13.774937] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:05:41.414 [2024-11-25 23:07:13.775078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59093 ] 00:05:41.674 [2024-11-25 23:07:13.949405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.935 [2024-11-25 23:07:14.156234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59093 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59093 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59077 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59077 ']' 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59077 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59077 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.316 killing process with pid 59077 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59077' 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59077 00:05:43.316 23:07:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59077 00:05:45.848 23:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59093 00:05:45.848 23:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59093 ']' 00:05:45.848 23:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59093 00:05:45.848 23:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:45.848 23:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.848 23:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59093 00:05:45.848 23:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.848 23:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.848 killing process with pid 59093 00:05:45.848 23:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59093' 00:05:45.848 23:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59093 00:05:45.848 23:07:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59093 00:05:47.222 ************************************ 00:05:47.222 00:05:47.222 real 0m6.520s 00:05:47.222 user 0m6.747s 00:05:47.222 sys 0m0.862s 00:05:47.222 23:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.222 23:07:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.222 END TEST locking_app_on_unlocked_coremask 00:05:47.222 ************************************ 00:05:47.222 23:07:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:47.222 23:07:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.222 23:07:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.222 23:07:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.222 ************************************ 00:05:47.222 START TEST locking_app_on_locked_coremask 00:05:47.222 ************************************ 00:05:47.223 23:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:47.223 23:07:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59195 00:05:47.223 23:07:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59195 /var/tmp/spdk.sock 00:05:47.223 23:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59195 ']' 00:05:47.223 23:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.223 23:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.223 23:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.223 23:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.223 23:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.223 23:07:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.223 [2024-11-25 23:07:19.419266] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:05:47.223 [2024-11-25 23:07:19.419379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59195 ] 00:05:47.223 [2024-11-25 23:07:19.575015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.480 [2024-11-25 23:07:19.652638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59210 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59210 /var/tmp/spdk2.sock 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59210 /var/tmp/spdk2.sock 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59210 /var/tmp/spdk2.sock 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59210 ']' 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.068 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.068 [2024-11-25 23:07:20.309747] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
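The NOT invocation traced above is autotest_common.sh's expect-failure guard: run the command and invert its exit status, so the surrounding test passes only when waitforlisten on the second target gives up. A minimal sketch of the idea (the real helper also validates its argument and, as the es > 128 check in the trace shows, special-cases signal exits):

NOT() {
  if "$@"; then
    return 1  # command unexpectedly succeeded
  fi
  return 0    # command failed, which is exactly what this test expects
}
NOT waitforlisten 59210 /var/tmp/spdk2.sock  # passes: 59210 cannot claim core 0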
00:05:48.069 [2024-11-25 23:07:20.309855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59210 ] 00:05:48.360 [2024-11-25 23:07:20.471570] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59195 has claimed it. 00:05:48.360 [2024-11-25 23:07:20.471614] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:48.619 ERROR: process (pid: 59210) is no longer running 00:05:48.619 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59210) - No such process 00:05:48.619 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.619 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:48.619 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:48.619 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.619 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.619 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.619 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59195 00:05:48.619 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59195 00:05:48.619 23:07:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.880 23:07:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59195 00:05:48.880 23:07:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59195 ']' 00:05:48.880 23:07:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59195 00:05:48.880 23:07:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:48.880 23:07:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.880 23:07:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59195 00:05:48.880 23:07:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.880 killing process with pid 59195 00:05:48.880 23:07:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.880 23:07:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59195' 00:05:48.880 23:07:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59195 00:05:48.880 23:07:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59195 00:05:50.267 00:05:50.267 real 0m3.285s 00:05:50.267 user 0m3.505s 00:05:50.267 sys 0m0.520s 00:05:50.267 ************************************ 00:05:50.267 END TEST locking_app_on_locked_coremask 00:05:50.267 ************************************ 00:05:50.267 23:07:22 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.267 23:07:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.526 23:07:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:50.526 23:07:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.526 23:07:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.526 23:07:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.526 ************************************ 00:05:50.526 START TEST locking_overlapped_coremask 00:05:50.526 ************************************ 00:05:50.526 23:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:50.526 23:07:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59264 00:05:50.526 23:07:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59264 /var/tmp/spdk.sock 00:05:50.526 23:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59264 ']' 00:05:50.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.526 23:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.526 23:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.526 23:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.526 23:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.526 23:07:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:50.526 23:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.526 [2024-11-25 23:07:22.754041] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:05:50.526 [2024-11-25 23:07:22.754146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59264 ] 00:05:50.784 [2024-11-25 23:07:22.901022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.785 [2024-11-25 23:07:22.979132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.785 [2024-11-25 23:07:22.979182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.785 [2024-11-25 23:07:22.979148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.352 23:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.352 23:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:51.352 23:07:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59282 00:05:51.352 23:07:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59282 /var/tmp/spdk2.sock 00:05:51.352 23:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:51.352 23:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59282 /var/tmp/spdk2.sock 00:05:51.352 23:07:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:51.352 23:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:51.352 23:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.352 23:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:51.352 23:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.353 23:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59282 /var/tmp/spdk2.sock 00:05:51.353 23:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59282 ']' 00:05:51.353 23:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.353 23:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.353 23:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.353 23:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.353 23:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.353 [2024-11-25 23:07:23.667730] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
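Why the claim attempt that follows is rejected: the two reactor masks overlap on exactly one core.

# 0x07 = 0b00111 -> cores 0,1,2 (held by pid 59264)
# 0x1c = 0b11100 -> cores 2,3,4 (requested by pid 59282)
printf '0x%x\n' $(( 0x07 & 0x1c ))  # -> 0x4, bit 2 set: the contested core
# pid 59282 therefore cannot take /var/tmp/spdk_cpu_lock_002 and exits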
00:05:51.353 [2024-11-25 23:07:23.667837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59282 ] 00:05:51.610 [2024-11-25 23:07:23.843914] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59264 has claimed it. 00:05:51.610 [2024-11-25 23:07:23.843966] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:52.174 ERROR: process (pid: 59282) is no longer running 00:05:52.174 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59282) - No such process 00:05:52.174 23:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.174 23:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:52.174 23:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59264 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59264 ']' 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59264 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59264 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.175 killing process with pid 59264 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59264' 00:05:52.175 23:07:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59264 00:05:52.175 23:07:24 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59264 00:05:53.556 00:05:53.556 real 0m2.796s 00:05:53.556 user 0m7.640s 00:05:53.556 sys 0m0.439s 00:05:53.556 23:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.556 23:07:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.556 ************************************ 00:05:53.556 END TEST locking_overlapped_coremask 00:05:53.556 ************************************ 00:05:53.556 23:07:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:53.556 23:07:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.556 23:07:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.556 23:07:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.556 ************************************ 00:05:53.556 START TEST locking_overlapped_coremask_via_rpc 00:05:53.556 ************************************ 00:05:53.556 23:07:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:53.556 23:07:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59335 00:05:53.556 23:07:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59335 /var/tmp/spdk.sock 00:05:53.556 23:07:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59335 ']' 00:05:53.556 23:07:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.556 23:07:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:53.556 23:07:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.556 23:07:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.556 23:07:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.556 23:07:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.556 [2024-11-25 23:07:25.625405] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:05:53.556 [2024-11-25 23:07:25.625530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59335 ] 00:05:53.556 [2024-11-25 23:07:25.785593] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
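The check_remaining_locks step that closed locking_overlapped_coremask above leans on two sorted glob expansions: the per-core lock files actually present must match exactly the set a 0x7 mask should leave behind. Reduced from the traced commands:

check_remaining_locks() {
  local locks=(/var/tmp/spdk_cpu_lock_*)
  local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for -m 0x7
  # both arrays expand in sorted order, so one string compare catches
  # stray as well as missing lock files
  [[ ${locks[*]} == "${locks_expected[*]}" ]]
}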
00:05:53.556 [2024-11-25 23:07:25.785644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.556 [2024-11-25 23:07:25.892322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.556 [2024-11-25 23:07:25.892627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.556 [2024-11-25 23:07:25.892741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.496 23:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.496 23:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:54.496 23:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59353 00:05:54.496 23:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59353 /var/tmp/spdk2.sock 00:05:54.496 23:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59353 ']' 00:05:54.496 23:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:54.496 23:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.496 23:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.496 23:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.496 23:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.496 23:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.496 [2024-11-25 23:07:26.579591] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:05:54.496 [2024-11-25 23:07:26.579718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59353 ] 00:05:54.496 [2024-11-25 23:07:26.758234] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:54.496 [2024-11-25 23:07:26.758297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.759 [2024-11-25 23:07:26.972417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.759 [2024-11-25 23:07:26.972569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.759 [2024-11-25 23:07:26.972585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.139 [2024-11-25 23:07:28.175200] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59335 has claimed it. 
00:05:56.139 request: 00:05:56.139 { 00:05:56.139 "method": "framework_enable_cpumask_locks", 00:05:56.139 "req_id": 1 00:05:56.139 } 00:05:56.139 Got JSON-RPC error response 00:05:56.139 response: 00:05:56.139 { 00:05:56.139 "code": -32603, 00:05:56.139 "message": "Failed to claim CPU core: 2" 00:05:56.139 } 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59335 /var/tmp/spdk.sock 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59335 ']' 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.139 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.140 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.140 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.140 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.140 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.140 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:56.140 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59353 /var/tmp/spdk2.sock 00:05:56.140 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59353 ']' 00:05:56.140 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.140 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.140 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
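The -32603 failure above is the expected result of this test. Both targets were started with --disable-cpumask-locks: the first (pid 59335, -m 0x7, cores 0-2) claimed its cores once framework_enable_cpumask_locks was issued, so the second (pid 59353, -m 0x1c, cores 2-4) cannot claim the shared core 2. A minimal sketch of reproducing the conflict by hand, assuming a built SPDK tree (paths illustrative):

    # two targets with overlapping coremasks, lock claiming deferred at boot
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &

    # first target claims cores 0-2
    ./scripts/rpc.py framework_enable_cpumask_locks

    # second target then fails on the shared core with
    # JSON-RPC error -32603, "Failed to claim CPU core: 2"
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks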
00:05:56.140 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.140 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.398 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.398 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:56.398 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:56.398 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:56.398 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:56.398 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:56.398 00:05:56.398 real 0m3.017s 00:05:56.398 user 0m1.051s 00:05:56.398 sys 0m0.113s 00:05:56.398 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.398 23:07:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.398 ************************************ 00:05:56.398 END TEST locking_overlapped_coremask_via_rpc 00:05:56.398 ************************************ 00:05:56.398 23:07:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:56.398 23:07:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59335 ]] 00:05:56.398 23:07:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59335 00:05:56.398 23:07:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59335 ']' 00:05:56.398 23:07:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59335 00:05:56.398 23:07:28 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:56.398 23:07:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.398 23:07:28 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59335 00:05:56.398 killing process with pid 59335 00:05:56.398 23:07:28 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.398 23:07:28 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.398 23:07:28 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59335' 00:05:56.398 23:07:28 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59335 00:05:56.398 23:07:28 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59335 00:05:57.773 23:07:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59353 ]] 00:05:57.773 23:07:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59353 00:05:57.773 23:07:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59353 ']' 00:05:57.773 23:07:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59353 00:05:57.773 23:07:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:57.773 23:07:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.773 
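check_remaining_locks, traced just above, asserts that the successful claim left exactly one lock file per core of the first target's 0x7 mask under /var/tmp. A sketch of inspecting that state while the target is running (the file names come from the comparison in the trace; the ls output layout is illustrative):

    $ ls /var/tmp/spdk_cpu_lock_*
    /var/tmp/spdk_cpu_lock_000  /var/tmp/spdk_cpu_lock_001  /var/tmp/spdk_cpu_lock_002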
23:07:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59353 00:05:57.773 23:07:29 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:57.773 23:07:29 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:57.773 23:07:29 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59353' 00:05:57.773 killing process with pid 59353 00:05:57.773 23:07:29 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59353 00:05:57.773 23:07:29 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59353 00:05:59.150 23:07:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.150 23:07:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:59.150 23:07:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59335 ]] 00:05:59.150 23:07:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59335 00:05:59.150 23:07:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59335 ']' 00:05:59.150 23:07:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59335 00:05:59.150 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59335) - No such process 00:05:59.150 23:07:31 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59335 is not found' 00:05:59.150 Process with pid 59335 is not found 00:05:59.150 23:07:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59353 ]] 00:05:59.150 23:07:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59353 00:05:59.150 23:07:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59353 ']' 00:05:59.150 23:07:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59353 00:05:59.150 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59353) - No such process 00:05:59.150 Process with pid 59353 is not found 00:05:59.150 23:07:31 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59353 is not found' 00:05:59.150 23:07:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.150 00:05:59.150 real 0m30.342s 00:05:59.150 user 0m52.023s 00:05:59.150 sys 0m4.495s 00:05:59.150 23:07:31 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.150 23:07:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.150 ************************************ 00:05:59.150 END TEST cpu_locks 00:05:59.150 ************************************ 00:05:59.150 00:05:59.150 real 0m57.951s 00:05:59.150 user 1m46.594s 00:05:59.150 sys 0m7.434s 00:05:59.150 23:07:31 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.150 ************************************ 00:05:59.150 END TEST event 00:05:59.150 ************************************ 00:05:59.150 23:07:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.150 23:07:31 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:59.150 23:07:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.150 23:07:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.150 23:07:31 -- common/autotest_common.sh@10 -- # set +x 00:05:59.150 ************************************ 00:05:59.150 START TEST thread 00:05:59.150 ************************************ 00:05:59.150 23:07:31 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:59.150 * Looking for test storage... 
00:05:59.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:59.150 23:07:31 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:59.150 23:07:31 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:59.150 23:07:31 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:59.150 23:07:31 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:59.150 23:07:31 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.150 23:07:31 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.150 23:07:31 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.150 23:07:31 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.150 23:07:31 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.150 23:07:31 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.150 23:07:31 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.150 23:07:31 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.150 23:07:31 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.150 23:07:31 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.150 23:07:31 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.150 23:07:31 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:59.150 23:07:31 thread -- scripts/common.sh@345 -- # : 1 00:05:59.150 23:07:31 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.150 23:07:31 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.150 23:07:31 thread -- scripts/common.sh@365 -- # decimal 1 00:05:59.150 23:07:31 thread -- scripts/common.sh@353 -- # local d=1 00:05:59.150 23:07:31 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.150 23:07:31 thread -- scripts/common.sh@355 -- # echo 1 00:05:59.150 23:07:31 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.150 23:07:31 thread -- scripts/common.sh@366 -- # decimal 2 00:05:59.150 23:07:31 thread -- scripts/common.sh@353 -- # local d=2 00:05:59.150 23:07:31 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.150 23:07:31 thread -- scripts/common.sh@355 -- # echo 2 00:05:59.150 23:07:31 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.150 23:07:31 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.150 23:07:31 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.150 23:07:31 thread -- scripts/common.sh@368 -- # return 0 00:05:59.150 23:07:31 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.150 23:07:31 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:59.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.150 --rc genhtml_branch_coverage=1 00:05:59.150 --rc genhtml_function_coverage=1 00:05:59.150 --rc genhtml_legend=1 00:05:59.150 --rc geninfo_all_blocks=1 00:05:59.150 --rc geninfo_unexecuted_blocks=1 00:05:59.150 00:05:59.150 ' 00:05:59.150 23:07:31 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:59.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.150 --rc genhtml_branch_coverage=1 00:05:59.150 --rc genhtml_function_coverage=1 00:05:59.150 --rc genhtml_legend=1 00:05:59.150 --rc geninfo_all_blocks=1 00:05:59.150 --rc geninfo_unexecuted_blocks=1 00:05:59.150 00:05:59.150 ' 00:05:59.150 23:07:31 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:59.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:59.150 --rc genhtml_branch_coverage=1 00:05:59.150 --rc genhtml_function_coverage=1 00:05:59.150 --rc genhtml_legend=1 00:05:59.150 --rc geninfo_all_blocks=1 00:05:59.150 --rc geninfo_unexecuted_blocks=1 00:05:59.150 00:05:59.150 ' 00:05:59.150 23:07:31 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:59.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.150 --rc genhtml_branch_coverage=1 00:05:59.150 --rc genhtml_function_coverage=1 00:05:59.150 --rc genhtml_legend=1 00:05:59.150 --rc geninfo_all_blocks=1 00:05:59.150 --rc geninfo_unexecuted_blocks=1 00:05:59.150 00:05:59.150 ' 00:05:59.150 23:07:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.150 23:07:31 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:59.150 23:07:31 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.150 23:07:31 thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.150 ************************************ 00:05:59.150 START TEST thread_poller_perf 00:05:59.150 ************************************ 00:05:59.150 23:07:31 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.150 [2024-11-25 23:07:31.452197] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:05:59.150 [2024-11-25 23:07:31.452280] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59508 ] 00:05:59.412 [2024-11-25 23:07:31.607641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.412 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:59.412 [2024-11-25 23:07:31.708167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.800 [2024-11-25T23:07:33.169Z] ====================================== 00:06:00.800 [2024-11-25T23:07:33.169Z] busy:2611097492 (cyc) 00:06:00.800 [2024-11-25T23:07:33.169Z] total_run_count: 306000 00:06:00.800 [2024-11-25T23:07:33.169Z] tsc_hz: 2600000000 (cyc) 00:06:00.800 [2024-11-25T23:07:33.169Z] ====================================== 00:06:00.800 [2024-11-25T23:07:33.169Z] poller_cost: 8532 (cyc), 3281 (nsec) 00:06:00.800 00:06:00.800 real 0m1.446s 00:06:00.800 user 0m1.276s 00:06:00.800 sys 0m0.063s 00:06:00.800 23:07:32 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.800 ************************************ 00:06:00.800 END TEST thread_poller_perf 00:06:00.800 ************************************ 00:06:00.800 23:07:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.800 23:07:32 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.800 23:07:32 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:00.800 23:07:32 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.800 23:07:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.800 ************************************ 00:06:00.800 START TEST thread_poller_perf 00:06:00.800 ************************************ 00:06:00.800 23:07:32 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.800 [2024-11-25 23:07:32.950194] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:06:00.800 [2024-11-25 23:07:32.950299] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59544 ] 00:06:00.800 [2024-11-25 23:07:33.107050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.060 Running 1000 pollers for 1 seconds with 0 microseconds period. 
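The poller_cost line above is derived from the counters printed with it: busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. For this 1-microsecond-period run that is 2611097492 / 306000 = 8532 cycles, and 8532 cycles / 2.6 cycles-per-nanosecond = 3281 nsec. The second run, launched above with -l 0, registers the same 1000 pollers as busy (zero-period) pollers, so its per-poll cost should come out far lower. The same arithmetic as a sketch, using the counters from this run's log:

    # poller_cost in cycles and nanoseconds from the run's counters
    busy=2611097492; runs=306000; tsc_hz=2600000000
    echo $(( busy / runs ))                          # 8532 (cyc)
    echo $(( busy / runs * 1000000000 / tsc_hz ))    # 3281 (nsec)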
00:06:01.060 [2024-11-25 23:07:33.206844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.999 [2024-11-25T23:07:34.368Z] ====================================== 00:06:01.999 [2024-11-25T23:07:34.368Z] busy:2603430508 (cyc) 00:06:01.999 [2024-11-25T23:07:34.368Z] total_run_count: 3949000 00:06:01.999 [2024-11-25T23:07:34.368Z] tsc_hz: 2600000000 (cyc) 00:06:01.999 [2024-11-25T23:07:34.368Z] ====================================== 00:06:01.999 [2024-11-25T23:07:34.368Z] poller_cost: 659 (cyc), 253 (nsec) 00:06:01.999 00:06:01.999 real 0m1.438s 00:06:01.999 user 0m1.265s 00:06:01.999 sys 0m0.065s 00:06:01.999 23:07:34 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.999 23:07:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.999 ************************************ 00:06:01.999 END TEST thread_poller_perf 00:06:01.999 ************************************ 00:06:02.270 23:07:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:02.270 00:06:02.270 real 0m3.117s 00:06:02.270 user 0m2.649s 00:06:02.270 sys 0m0.250s 00:06:02.270 23:07:34 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.270 23:07:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.270 ************************************ 00:06:02.270 END TEST thread 00:06:02.270 ************************************ 00:06:02.270 23:07:34 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:02.270 23:07:34 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:02.270 23:07:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.270 23:07:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.270 23:07:34 -- common/autotest_common.sh@10 -- # set +x 00:06:02.270 ************************************ 00:06:02.270 START TEST app_cmdline 00:06:02.270 ************************************ 00:06:02.270 23:07:34 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:02.270 * Looking for test storage... 
00:06:02.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:02.270 23:07:34 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.270 23:07:34 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.270 23:07:34 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.270 23:07:34 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:02.270 23:07:34 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.271 23:07:34 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:02.271 23:07:34 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:02.271 23:07:34 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.271 23:07:34 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:02.271 23:07:34 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.271 23:07:34 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.271 23:07:34 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.271 23:07:34 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:02.271 23:07:34 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.271 23:07:34 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.271 --rc genhtml_branch_coverage=1 00:06:02.271 --rc genhtml_function_coverage=1 00:06:02.271 --rc genhtml_legend=1 00:06:02.271 --rc geninfo_all_blocks=1 00:06:02.271 --rc geninfo_unexecuted_blocks=1 00:06:02.271 00:06:02.271 ' 00:06:02.271 23:07:34 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.271 --rc genhtml_branch_coverage=1 00:06:02.271 --rc genhtml_function_coverage=1 00:06:02.271 --rc genhtml_legend=1 00:06:02.271 --rc geninfo_all_blocks=1 00:06:02.271 --rc geninfo_unexecuted_blocks=1 00:06:02.271 
00:06:02.271 ' 00:06:02.271 23:07:34 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.271 --rc genhtml_branch_coverage=1 00:06:02.271 --rc genhtml_function_coverage=1 00:06:02.271 --rc genhtml_legend=1 00:06:02.271 --rc geninfo_all_blocks=1 00:06:02.271 --rc geninfo_unexecuted_blocks=1 00:06:02.271 00:06:02.271 ' 00:06:02.271 23:07:34 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.271 --rc genhtml_branch_coverage=1 00:06:02.271 --rc genhtml_function_coverage=1 00:06:02.271 --rc genhtml_legend=1 00:06:02.271 --rc geninfo_all_blocks=1 00:06:02.271 --rc geninfo_unexecuted_blocks=1 00:06:02.271 00:06:02.271 ' 00:06:02.271 23:07:34 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:02.271 23:07:34 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59628 00:06:02.271 23:07:34 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59628 00:06:02.271 23:07:34 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59628 ']' 00:06:02.271 23:07:34 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.271 23:07:34 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.271 23:07:34 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.271 23:07:34 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:02.271 23:07:34 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.271 23:07:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.542 [2024-11-25 23:07:34.646951] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:06:02.542 [2024-11-25 23:07:34.647086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59628 ] 00:06:02.542 [2024-11-25 23:07:34.805678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.542 [2024-11-25 23:07:34.905624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:03.482 23:07:35 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:03.482 { 00:06:03.482 "version": "SPDK v25.01-pre git sha1 2a91567e4", 00:06:03.482 "fields": { 00:06:03.482 "major": 25, 00:06:03.482 "minor": 1, 00:06:03.482 "patch": 0, 00:06:03.482 "suffix": "-pre", 00:06:03.482 "commit": "2a91567e4" 00:06:03.482 } 00:06:03.482 } 00:06:03.482 23:07:35 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:03.482 23:07:35 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:03.482 23:07:35 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:03.482 23:07:35 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:03.482 23:07:35 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:03.482 23:07:35 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.482 23:07:35 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.482 23:07:35 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:03.482 23:07:35 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:03.482 23:07:35 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:03.482 23:07:35 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.740 request: 00:06:03.740 { 00:06:03.740 "method": "env_dpdk_get_mem_stats", 00:06:03.740 "req_id": 1 00:06:03.740 } 00:06:03.740 Got JSON-RPC error response 00:06:03.740 response: 00:06:03.740 { 00:06:03.740 "code": -32601, 00:06:03.740 "message": "Method not found" 00:06:03.740 } 00:06:03.740 23:07:35 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:03.740 23:07:35 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.740 23:07:35 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:03.740 23:07:35 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.740 23:07:35 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59628 00:06:03.741 23:07:35 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59628 ']' 00:06:03.741 23:07:35 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59628 00:06:03.741 23:07:35 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:03.741 23:07:35 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.741 23:07:35 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59628 00:06:03.741 23:07:35 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.741 killing process with pid 59628 00:06:03.741 23:07:35 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.741 23:07:35 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59628' 00:06:03.741 23:07:35 app_cmdline -- common/autotest_common.sh@973 -- # kill 59628 00:06:03.741 23:07:35 app_cmdline -- common/autotest_common.sh@978 -- # wait 59628 00:06:05.115 00:06:05.116 real 0m2.933s 00:06:05.116 user 0m3.240s 00:06:05.116 sys 0m0.407s 00:06:05.116 23:07:37 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.116 ************************************ 00:06:05.116 END TEST app_cmdline 00:06:05.116 ************************************ 00:06:05.116 23:07:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.116 23:07:37 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:05.116 23:07:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.116 23:07:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.116 23:07:37 -- common/autotest_common.sh@10 -- # set +x 00:06:05.116 ************************************ 00:06:05.116 START TEST version 00:06:05.116 ************************************ 00:06:05.116 23:07:37 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:05.116 * Looking for test storage... 
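The "Method not found" failure above is the point of the cmdline test: this spdk_tgt instance was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods answer and anything else, here env_dpdk_get_mem_stats, is rejected with JSON-RPC error -32601; the NOT wrapper in the trace asserts that the call fails. A sketch of the allow-list behavior (paths illustrative):

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    ./scripts/rpc.py rpc_get_methods           # lists only the two allowed methods
    ./scripts/rpc.py spdk_get_version          # returns the version object shown above
    ./scripts/rpc.py env_dpdk_get_mem_stats    # fails: -32601, "Method not found"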
00:06:05.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:05.116 23:07:37 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.374 23:07:37 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.374 23:07:37 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.374 23:07:37 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.374 23:07:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.374 23:07:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.374 23:07:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.374 23:07:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.374 23:07:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.374 23:07:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.374 23:07:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.374 23:07:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.374 23:07:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.374 23:07:37 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.374 23:07:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.374 23:07:37 version -- scripts/common.sh@344 -- # case "$op" in 00:06:05.374 23:07:37 version -- scripts/common.sh@345 -- # : 1 00:06:05.374 23:07:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.374 23:07:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.374 23:07:37 version -- scripts/common.sh@365 -- # decimal 1 00:06:05.374 23:07:37 version -- scripts/common.sh@353 -- # local d=1 00:06:05.374 23:07:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.374 23:07:37 version -- scripts/common.sh@355 -- # echo 1 00:06:05.374 23:07:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.374 23:07:37 version -- scripts/common.sh@366 -- # decimal 2 00:06:05.374 23:07:37 version -- scripts/common.sh@353 -- # local d=2 00:06:05.374 23:07:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.374 23:07:37 version -- scripts/common.sh@355 -- # echo 2 00:06:05.374 23:07:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.374 23:07:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.374 23:07:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.374 23:07:37 version -- scripts/common.sh@368 -- # return 0 00:06:05.374 23:07:37 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.374 23:07:37 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.374 --rc genhtml_branch_coverage=1 00:06:05.374 --rc genhtml_function_coverage=1 00:06:05.374 --rc genhtml_legend=1 00:06:05.374 --rc geninfo_all_blocks=1 00:06:05.374 --rc geninfo_unexecuted_blocks=1 00:06:05.374 00:06:05.374 ' 00:06:05.374 23:07:37 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.374 --rc genhtml_branch_coverage=1 00:06:05.374 --rc genhtml_function_coverage=1 00:06:05.374 --rc genhtml_legend=1 00:06:05.374 --rc geninfo_all_blocks=1 00:06:05.374 --rc geninfo_unexecuted_blocks=1 00:06:05.374 00:06:05.374 ' 00:06:05.374 23:07:37 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.374 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:05.374 --rc genhtml_branch_coverage=1 00:06:05.374 --rc genhtml_function_coverage=1 00:06:05.374 --rc genhtml_legend=1 00:06:05.374 --rc geninfo_all_blocks=1 00:06:05.374 --rc geninfo_unexecuted_blocks=1 00:06:05.374 00:06:05.374 ' 00:06:05.374 23:07:37 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.374 --rc genhtml_branch_coverage=1 00:06:05.374 --rc genhtml_function_coverage=1 00:06:05.374 --rc genhtml_legend=1 00:06:05.374 --rc geninfo_all_blocks=1 00:06:05.374 --rc geninfo_unexecuted_blocks=1 00:06:05.374 00:06:05.374 ' 00:06:05.374 23:07:37 version -- app/version.sh@17 -- # get_header_version major 00:06:05.374 23:07:37 version -- app/version.sh@14 -- # cut -f2 00:06:05.374 23:07:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:05.374 23:07:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.374 23:07:37 version -- app/version.sh@17 -- # major=25 00:06:05.374 23:07:37 version -- app/version.sh@18 -- # get_header_version minor 00:06:05.374 23:07:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:05.374 23:07:37 version -- app/version.sh@14 -- # cut -f2 00:06:05.374 23:07:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.374 23:07:37 version -- app/version.sh@18 -- # minor=1 00:06:05.374 23:07:37 version -- app/version.sh@19 -- # get_header_version patch 00:06:05.374 23:07:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:05.374 23:07:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.374 23:07:37 version -- app/version.sh@14 -- # cut -f2 00:06:05.374 23:07:37 version -- app/version.sh@19 -- # patch=0 00:06:05.374 23:07:37 version -- app/version.sh@20 -- # get_header_version suffix 00:06:05.374 23:07:37 version -- app/version.sh@14 -- # cut -f2 00:06:05.374 23:07:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:05.374 23:07:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.374 23:07:37 version -- app/version.sh@20 -- # suffix=-pre 00:06:05.374 23:07:37 version -- app/version.sh@22 -- # version=25.1 00:06:05.374 23:07:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:05.374 23:07:37 version -- app/version.sh@28 -- # version=25.1rc0 00:06:05.374 23:07:37 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:05.374 23:07:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:05.374 23:07:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:05.374 23:07:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:05.374 00:06:05.374 real 0m0.196s 00:06:05.374 user 0m0.136s 00:06:05.374 sys 0m0.090s 00:06:05.374 23:07:37 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.374 ************************************ 00:06:05.374 END TEST version 00:06:05.374 ************************************ 00:06:05.374 23:07:37 version -- common/autotest_common.sh@10 -- # set +x 00:06:05.374 23:07:37 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:05.374 23:07:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:05.374 23:07:37 -- spdk/autotest.sh@194 -- # uname -s 00:06:05.374 23:07:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:05.374 23:07:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:05.374 23:07:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:05.374 23:07:37 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:05.374 23:07:37 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:05.374 23:07:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:05.374 23:07:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.374 23:07:37 -- common/autotest_common.sh@10 -- # set +x 00:06:05.374 ************************************ 00:06:05.374 START TEST blockdev_nvme 00:06:05.374 ************************************ 00:06:05.374 23:07:37 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:05.374 * Looking for test storage... 00:06:05.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:05.374 23:07:37 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.374 23:07:37 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.374 23:07:37 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.634 23:07:37 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:05.634 23:07:37 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.635 23:07:37 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.635 23:07:37 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.635 23:07:37 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:05.635 23:07:37 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.635 23:07:37 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.635 --rc genhtml_branch_coverage=1 00:06:05.635 --rc genhtml_function_coverage=1 00:06:05.635 --rc genhtml_legend=1 00:06:05.635 --rc geninfo_all_blocks=1 00:06:05.635 --rc geninfo_unexecuted_blocks=1 00:06:05.635 00:06:05.635 ' 00:06:05.635 23:07:37 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.635 --rc genhtml_branch_coverage=1 00:06:05.635 --rc genhtml_function_coverage=1 00:06:05.635 --rc genhtml_legend=1 00:06:05.635 --rc geninfo_all_blocks=1 00:06:05.635 --rc geninfo_unexecuted_blocks=1 00:06:05.635 00:06:05.635 ' 00:06:05.635 23:07:37 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.635 --rc genhtml_branch_coverage=1 00:06:05.635 --rc genhtml_function_coverage=1 00:06:05.635 --rc genhtml_legend=1 00:06:05.635 --rc geninfo_all_blocks=1 00:06:05.635 --rc geninfo_unexecuted_blocks=1 00:06:05.635 00:06:05.635 ' 00:06:05.635 23:07:37 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.635 --rc genhtml_branch_coverage=1 00:06:05.635 --rc genhtml_function_coverage=1 00:06:05.635 --rc genhtml_legend=1 00:06:05.635 --rc geninfo_all_blocks=1 00:06:05.635 --rc geninfo_unexecuted_blocks=1 00:06:05.635 00:06:05.635 ' 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:05.635 23:07:37 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59800 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59800 00:06:05.635 23:07:37 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59800 ']' 00:06:05.635 23:07:37 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.635 23:07:37 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.635 23:07:37 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.635 23:07:37 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.635 23:07:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.635 23:07:37 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:05.635 [2024-11-25 23:07:37.835868] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:06:05.635 [2024-11-25 23:07:37.835956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59800 ] 00:06:05.635 [2024-11-25 23:07:37.991887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.894 [2024-11-25 23:07:38.089166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.461 23:07:38 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.461 23:07:38 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:06.461 23:07:38 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:06.461 23:07:38 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:06.461 23:07:38 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:06.461 23:07:38 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:06.461 23:07:38 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:06.461 23:07:38 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:06.461 23:07:38 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.461 23:07:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.764 23:07:39 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.764 23:07:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:06.764 23:07:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.764 23:07:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.764 23:07:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.764 23:07:39 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:06.764 23:07:39 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.764 23:07:39 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:06.764 23:07:39 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.764 23:07:39 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:06.764 23:07:39 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:06.765 23:07:39 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "833622e9-c7a4-4a71-ae48-4cd76191770d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "833622e9-c7a4-4a71-ae48-4cd76191770d",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "fee71368-2ecb-4aa0-8ce2-3e59ad911aca"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fee71368-2ecb-4aa0-8ce2-3e59ad911aca",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "61189392-5d5b-4724-bf4d-b669896366a4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "61189392-5d5b-4724-bf4d-b669896366a4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "fcb4a822-df25-4f1e-8d28-dda73bd0f712"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fcb4a822-df25-4f1e-8d28-dda73bd0f712",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "1e449e41-3717-40fb-b587-4b8b64a1aa51"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "1e449e41-3717-40fb-b587-4b8b64a1aa51",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d11a1f39-978c-4bc9-ad42-0571716601bb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d11a1f39-978c-4bc9-ad42-0571716601bb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:07.022 23:07:39 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:07.022 23:07:39 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:07.022 23:07:39 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:07.022 23:07:39 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59800 00:06:07.022 23:07:39 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59800 ']' 00:06:07.022 23:07:39 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59800 00:06:07.022 23:07:39 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:07.022 23:07:39 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.022 23:07:39 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59800 00:06:07.022 23:07:39 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.022 23:07:39 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.022 23:07:39 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59800' 00:06:07.022 killing process with pid 59800 00:06:07.022 23:07:39 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59800 00:06:07.022 23:07:39 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59800 00:06:08.401 23:07:40 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:08.401 23:07:40 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:08.401 23:07:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:08.401 23:07:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.401 23:07:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.401 ************************************ 00:06:08.401 START TEST bdev_hello_world 00:06:08.401 ************************************ 00:06:08.401 23:07:40 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:08.662 [2024-11-25 23:07:40.791932] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:06:08.662 [2024-11-25 23:07:40.792048] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59884 ] 00:06:08.662 [2024-11-25 23:07:40.952339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.923 [2024-11-25 23:07:41.049613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.501 [2024-11-25 23:07:41.597956] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:09.501 [2024-11-25 23:07:41.597998] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:09.501 [2024-11-25 23:07:41.598015] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:09.501 [2024-11-25 23:07:41.600384] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:09.501 [2024-11-25 23:07:41.601047] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:09.501 [2024-11-25 23:07:41.601082] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:09.501 [2024-11-25 23:07:41.601622] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
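The hello_bdev example above opens Nvme0n1, writes the string "Hello World!" at offset 0, reads it back, and then stops the app. A minimal stand-alone re-run, with the binary and JSON paths taken verbatim from this log, would look roughly like:

    # hello_bdev attaches the controllers described in bdev.json, then
    # performs one write plus read-back against the bdev named by -b.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1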
00:06:09.501 00:06:09.501 [2024-11-25 23:07:41.601645] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:10.073 00:06:10.073 real 0m1.577s 00:06:10.073 user 0m1.286s 00:06:10.073 sys 0m0.182s 00:06:10.073 23:07:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.073 23:07:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:10.073 ************************************ 00:06:10.073 END TEST bdev_hello_world 00:06:10.073 ************************************ 00:06:10.073 23:07:42 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:10.073 23:07:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.073 23:07:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.073 23:07:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.073 ************************************ 00:06:10.073 START TEST bdev_bounds 00:06:10.073 ************************************ 00:06:10.073 23:07:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:10.073 23:07:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59920 00:06:10.073 23:07:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.073 23:07:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:10.073 23:07:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59920' 00:06:10.073 Process bdevio pid: 59920 00:06:10.073 23:07:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59920 00:06:10.073 23:07:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59920 ']' 00:06:10.073 23:07:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.073 23:07:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.073 23:07:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.073 23:07:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.073 23:07:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:10.073 [2024-11-25 23:07:42.429420] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
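The bdev_bounds test drives SPDK's bdevio harness in two steps: start the server in wait mode, then trigger the CUnit suites over RPC. A condensed sketch of the pattern used in this run (binaries and flags carried verbatim from the log; treating /var/tmp/spdk.sock as the RPC socket is an assumption based on the waitforlisten message above):

    # -w makes bdevio wait for an RPC trigger instead of running tests
    # at startup; the remaining flags are carried as logged.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # Once the socket is up, kick off every registered CUnit suite:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests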
00:06:10.073 [2024-11-25 23:07:42.429538] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59920 ] 00:06:10.334 [2024-11-25 23:07:42.589407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.334 [2024-11-25 23:07:42.691730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.334 [2024-11-25 23:07:42.692022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.334 [2024-11-25 23:07:42.692120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.277 23:07:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.277 23:07:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:11.277 23:07:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:11.277 I/O targets: 00:06:11.277 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:11.277 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:11.277 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:11.277 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:11.277 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:11.277 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:11.277 00:06:11.277 00:06:11.277 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.277 http://cunit.sourceforge.net/ 00:06:11.277 00:06:11.277 00:06:11.277 Suite: bdevio tests on: Nvme3n1 00:06:11.277 Test: blockdev write read block ...passed 00:06:11.277 Test: blockdev write zeroes read block ...passed 00:06:11.277 Test: blockdev write zeroes read no split ...passed 00:06:11.277 Test: blockdev write zeroes read split ...passed 00:06:11.277 Test: blockdev write zeroes read split partial ...passed 00:06:11.277 Test: blockdev reset ...[2024-11-25 23:07:43.451390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:11.277 passed 00:06:11.277 Test: blockdev write read 8 blocks ...[2024-11-25 23:07:43.455606] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:11.277 passed 00:06:11.277 Test: blockdev write read size > 128k ...passed 00:06:11.277 Test: blockdev write read invalid size ...passed 00:06:11.277 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.277 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.277 Test: blockdev write read max offset ...passed 00:06:11.277 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.277 Test: blockdev writev readv 8 blocks ...passed 00:06:11.277 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.277 Test: blockdev writev readv block ...passed 00:06:11.277 Test: blockdev writev readv size > 128k ...passed 00:06:11.277 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.277 Test: blockdev comparev and writev ...[2024-11-25 23:07:43.474655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b100a000 len:0x1000 00:06:11.277 [2024-11-25 23:07:43.474702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:11.277 passed 00:06:11.277 Test: blockdev nvme passthru rw ...passed 00:06:11.277 Test: blockdev nvme passthru vendor specific ...[2024-11-25 23:07:43.477425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:11.277 [2024-11-25 23:07:43.477458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:11.277 passed 00:06:11.277 Test: blockdev nvme admin passthru ...passed 00:06:11.277 Test: blockdev copy ...passed 00:06:11.277 Suite: bdevio tests on: Nvme2n3 00:06:11.277 Test: blockdev write read block ...passed 00:06:11.277 Test: blockdev write zeroes read block ...passed 00:06:11.277 Test: blockdev write zeroes read no split ...passed 00:06:11.277 Test: blockdev write zeroes read split ...passed 00:06:11.277 Test: blockdev write zeroes read split partial ...passed 00:06:11.277 Test: blockdev reset ...[2024-11-25 23:07:43.535708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:11.277 passed 00:06:11.277 Test: blockdev write read 8 blocks ...[2024-11-25 23:07:43.538862] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:11.277 passed 00:06:11.277 Test: blockdev write read size > 128k ...passed 00:06:11.277 Test: blockdev write read invalid size ...passed 00:06:11.277 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.277 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.277 Test: blockdev write read max offset ...passed 00:06:11.277 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.277 Test: blockdev writev readv 8 blocks ...passed 00:06:11.277 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.277 Test: blockdev writev readv block ...passed 00:06:11.277 Test: blockdev writev readv size > 128k ...passed 00:06:11.277 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.277 Test: blockdev comparev and writev ...[2024-11-25 23:07:43.557649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x294a06000 len:0x1000 00:06:11.277 [2024-11-25 23:07:43.557784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:11.277 passed 00:06:11.277 Test: blockdev nvme passthru rw ...passed 00:06:11.277 Test: blockdev nvme passthru vendor specific ...passed 00:06:11.277 Test: blockdev nvme admin passthru ...[2024-11-25 23:07:43.560576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:11.277 [2024-11-25 23:07:43.560651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:11.277 passed 00:06:11.277 Test: blockdev copy ...passed 00:06:11.277 Suite: bdevio tests on: Nvme2n2 00:06:11.277 Test: blockdev write read block ...passed 00:06:11.277 Test: blockdev write zeroes read block ...passed 00:06:11.277 Test: blockdev write zeroes read no split ...passed 00:06:11.277 Test: blockdev write zeroes read split ...passed 00:06:11.277 Test: blockdev write zeroes read split partial ...passed 00:06:11.277 Test: blockdev reset ...[2024-11-25 23:07:43.617647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:11.277 [2024-11-25 23:07:43.621817] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:11.277 passed 00:06:11.277 Test: blockdev write read 8 blocks ...passed 00:06:11.277 Test: blockdev write read size > 128k ...passed 00:06:11.277 Test: blockdev write read invalid size ...passed 00:06:11.277 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.277 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.277 Test: blockdev write read max offset ...passed 00:06:11.277 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.277 Test: blockdev writev readv 8 blocks ...passed 00:06:11.277 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.277 Test: blockdev writev readv block ...passed 00:06:11.277 Test: blockdev writev readv size > 128k ...passed 00:06:11.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.278 Test: blockdev comparev and writev ...[2024-11-25 23:07:43.639872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5a3c000 len:0x1000 00:06:11.278 [2024-11-25 23:07:43.639929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:11.278 passed 00:06:11.540 Test: blockdev nvme passthru rw ...passed 00:06:11.540 Test: blockdev nvme passthru vendor specific ...[2024-11-25 23:07:43.642702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:11.540 [2024-11-25 23:07:43.642734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:11.540 passed 00:06:11.540 Test: blockdev nvme admin passthru ...passed 00:06:11.540 Test: blockdev copy ...passed 00:06:11.540 Suite: bdevio tests on: Nvme2n1 00:06:11.540 Test: blockdev write read block ...passed 00:06:11.540 Test: blockdev write zeroes read block ...passed 00:06:11.540 Test: blockdev write zeroes read no split ...passed 00:06:11.540 Test: blockdev write zeroes read split ...passed 00:06:11.540 Test: blockdev write zeroes read split partial ...passed 00:06:11.540 Test: blockdev reset ...[2024-11-25 23:07:43.700460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:11.540 [2024-11-25 23:07:43.704377] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:11.540 passed 00:06:11.540 Test: blockdev write read 8 blocks ...passed 00:06:11.540 Test: blockdev write read size > 128k ...passed 00:06:11.540 Test: blockdev write read invalid size ...passed 00:06:11.540 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.540 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.540 Test: blockdev write read max offset ...passed 00:06:11.540 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.540 Test: blockdev writev readv 8 blocks ...passed 00:06:11.540 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.540 Test: blockdev writev readv block ...passed 00:06:11.540 Test: blockdev writev readv size > 128k ...passed 00:06:11.540 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.540 Test: blockdev comparev and writev ...[2024-11-25 23:07:43.722804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5a38000 len:0x1000 00:06:11.540 [2024-11-25 23:07:43.722873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:11.540 passed 00:06:11.540 Test: blockdev nvme passthru rw ...passed 00:06:11.540 Test: blockdev nvme passthru vendor specific ...[2024-11-25 23:07:43.725354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:11.540 [2024-11-25 23:07:43.725396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:11.540 passed 00:06:11.540 Test: blockdev nvme admin passthru ...passed 00:06:11.540 Test: blockdev copy ...passed 00:06:11.540 Suite: bdevio tests on: Nvme1n1 00:06:11.540 Test: blockdev write read block ...passed 00:06:11.540 Test: blockdev write zeroes read block ...passed 00:06:11.540 Test: blockdev write zeroes read no split ...passed 00:06:11.540 Test: blockdev write zeroes read split ...passed 00:06:11.540 Test: blockdev write zeroes read split partial ...passed 00:06:11.540 Test: blockdev reset ...[2024-11-25 23:07:43.782384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:11.540 passed 00:06:11.540 Test: blockdev write read 8 blocks ...[2024-11-25 23:07:43.786222] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:11.540 passed 00:06:11.540 Test: blockdev write read size > 128k ...passed 00:06:11.540 Test: blockdev write read invalid size ...passed 00:06:11.540 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.540 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.540 Test: blockdev write read max offset ...passed 00:06:11.540 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.540 Test: blockdev writev readv 8 blocks ...passed 00:06:11.540 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.540 Test: blockdev writev readv block ...passed 00:06:11.540 Test: blockdev writev readv size > 128k ...passed 00:06:11.540 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.540 Test: blockdev comparev and writev ...[2024-11-25 23:07:43.804003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5a34000 len:0x1000 00:06:11.540 [2024-11-25 23:07:43.804049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:11.540 passed 00:06:11.540 Test: blockdev nvme passthru rw ...passed 00:06:11.540 Test: blockdev nvme passthru vendor specific ...[2024-11-25 23:07:43.806214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:11.540 [2024-11-25 23:07:43.806248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:11.540 passed 00:06:11.540 Test: blockdev nvme admin passthru ...passed 00:06:11.540 Test: blockdev copy ...passed 00:06:11.540 Suite: bdevio tests on: Nvme0n1 00:06:11.540 Test: blockdev write read block ...passed 00:06:11.540 Test: blockdev write zeroes read block ...passed 00:06:11.540 Test: blockdev write zeroes read no split ...passed 00:06:11.540 Test: blockdev write zeroes read split ...passed 00:06:11.540 Test: blockdev write zeroes read split partial ...passed 00:06:11.540 Test: blockdev reset ...[2024-11-25 23:07:43.865546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:11.540 [2024-11-25 23:07:43.869345] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:11.540 passed 00:06:11.540 Test: blockdev write read 8 blocks ...passed 00:06:11.540 Test: blockdev write read size > 128k ...passed 00:06:11.540 Test: blockdev write read invalid size ...passed 00:06:11.540 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.540 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.540 Test: blockdev write read max offset ...passed 00:06:11.540 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.540 Test: blockdev writev readv 8 blocks ...passed 00:06:11.540 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.540 Test: blockdev writev readv block ...passed 00:06:11.540 Test: blockdev writev readv size > 128k ...passed 00:06:11.540 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.540 Test: blockdev comparev and writev ...passed 00:06:11.540 Test: blockdev nvme passthru rw ...[2024-11-25 23:07:43.884508] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:11.540 separate metadata which is not supported yet. 
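This skip is expected rather than a failure: the earlier bdev_get_bdevs dump shows Nvme0n1 with "md_size": 64 and "md_interleave": false, i.e. separate (non-interleaved) metadata, which bdevio's comparev_and_writev helper does not support yet. One way to confirm this against a live target (the jq filter is illustrative):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
        | jq '.[0] | {md_size, md_interleave}'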
00:06:11.540 passed 00:06:11.540 Test: blockdev nvme passthru vendor specific ...passed 00:06:11.541 Test: blockdev nvme admin passthru ...[2024-11-25 23:07:43.886278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:11.541 [2024-11-25 23:07:43.886331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:11.541 passed 00:06:11.541 Test: blockdev copy ...passed 00:06:11.541 00:06:11.541 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.541 suites 6 6 n/a 0 0 00:06:11.541 tests 138 138 138 0 0 00:06:11.541 asserts 893 893 893 0 n/a 00:06:11.541 00:06:11.541 Elapsed time = 1.258 seconds 00:06:11.541 0 00:06:11.802 23:07:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59920 00:06:11.802 23:07:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59920 ']' 00:06:11.802 23:07:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59920 00:06:11.802 23:07:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:11.802 23:07:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.802 23:07:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59920 00:06:11.802 23:07:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.802 killing process with pid 59920 00:06:11.802 23:07:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.802 23:07:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59920' 00:06:11.802 23:07:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59920 00:06:11.802 23:07:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59920 00:06:12.374 23:07:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:12.374 00:06:12.375 real 0m2.257s 00:06:12.375 user 0m5.679s 00:06:12.375 sys 0m0.310s 00:06:12.375 23:07:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.375 ************************************ 00:06:12.375 END TEST bdev_bounds 00:06:12.375 ************************************ 00:06:12.375 23:07:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:12.375 23:07:44 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:12.375 23:07:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:12.375 23:07:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.375 23:07:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.375 ************************************ 00:06:12.375 START TEST bdev_nbd 00:06:12.375 ************************************ 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59980 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59980 /var/tmp/spdk-nbd.sock 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59980 ']' 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.375 23:07:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:12.637 [2024-11-25 23:07:44.784515] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
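The nbd stage that follows exports each bdev as a kernel /dev/nbdX device and proves it with a single direct-I/O read. The per-device pattern visible in the xtrace below reduces to roughly this sketch (RPC socket and dd parameters are taken from the log; the loop is a simplification of nbd_rpc_start_stop_verify, not its exact body):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    for bdev in Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
        # nbd_start_disk prints the /dev/nbdX it bound the bdev to.
        dev=$("$RPC" -s "$SOCK" nbd_start_disk "$bdev")
        # One 4 KiB O_DIRECT read confirms the export answers I/O.
        dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        "$RPC" -s "$SOCK" nbd_stop_disk "$dev"
    done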
00:06:12.637 [2024-11-25 23:07:44.784693] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:12.637 [2024-11-25 23:07:44.953216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.898 [2024-11-25 23:07:45.082652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.470 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.470 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:13.470 23:07:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:13.470 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.470 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.470 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:13.470 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:13.470 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.470 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.470 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:13.470 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:13.470 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:13.470 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:13.470 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:13.470 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.732 1+0 records in 
00:06:13.732 1+0 records out 00:06:13.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00095392 s, 4.3 MB/s 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:13.732 23:07:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.994 1+0 records in 00:06:13.994 1+0 records out 00:06:13.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115399 s, 3.5 MB/s 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:13.994 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.255 1+0 records in 00:06:14.255 1+0 records out 00:06:14.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000980578 s, 4.2 MB/s 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:14.255 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.517 1+0 records in 00:06:14.517 1+0 records out 00:06:14.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106653 s, 3.8 MB/s 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.517 23:07:46 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:14.517 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.778 1+0 records in 00:06:14.778 1+0 records out 00:06:14.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109753 s, 3.7 MB/s 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:14.778 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:14.779 23:07:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.041 1+0 records in 00:06:15.041 1+0 records out 00:06:15.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131787 s, 3.1 MB/s 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.041 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.303 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:15.303 { 00:06:15.303 "nbd_device": "/dev/nbd0", 00:06:15.303 "bdev_name": "Nvme0n1" 00:06:15.303 }, 00:06:15.303 { 00:06:15.303 "nbd_device": "/dev/nbd1", 00:06:15.303 "bdev_name": "Nvme1n1" 00:06:15.303 }, 00:06:15.303 { 00:06:15.303 "nbd_device": "/dev/nbd2", 00:06:15.303 "bdev_name": "Nvme2n1" 00:06:15.303 }, 00:06:15.303 { 00:06:15.303 "nbd_device": "/dev/nbd3", 00:06:15.303 "bdev_name": "Nvme2n2" 00:06:15.303 }, 00:06:15.303 { 00:06:15.303 "nbd_device": "/dev/nbd4", 00:06:15.303 "bdev_name": "Nvme2n3" 00:06:15.303 }, 00:06:15.303 { 00:06:15.303 "nbd_device": "/dev/nbd5", 00:06:15.303 "bdev_name": "Nvme3n1" 00:06:15.303 } 00:06:15.303 ]' 00:06:15.303 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:15.303 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:15.303 { 00:06:15.303 "nbd_device": "/dev/nbd0", 00:06:15.303 "bdev_name": "Nvme0n1" 00:06:15.303 }, 00:06:15.303 { 00:06:15.303 "nbd_device": "/dev/nbd1", 00:06:15.303 "bdev_name": "Nvme1n1" 00:06:15.303 }, 00:06:15.303 { 00:06:15.303 "nbd_device": "/dev/nbd2", 00:06:15.303 "bdev_name": "Nvme2n1" 00:06:15.303 }, 00:06:15.303 { 00:06:15.303 "nbd_device": "/dev/nbd3", 00:06:15.303 "bdev_name": "Nvme2n2" 00:06:15.303 }, 00:06:15.303 { 00:06:15.303 "nbd_device": "/dev/nbd4", 00:06:15.303 "bdev_name": "Nvme2n3" 00:06:15.303 }, 00:06:15.303 { 00:06:15.303 "nbd_device": "/dev/nbd5", 00:06:15.303 "bdev_name": "Nvme3n1" 00:06:15.303 } 00:06:15.303 ]' 00:06:15.303 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:15.303 23:07:47 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:15.303 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.303 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:15.304 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.304 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:15.304 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.304 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.304 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.579 23:07:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:15.841 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:15.841 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:15.841 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:15.841 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.841 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.841 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:15.841 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.841 23:07:48 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:15.841 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.841 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:16.103 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:16.103 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:16.103 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:16.103 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.103 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.103 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:16.103 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.103 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.103 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.103 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:16.360 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:16.360 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:16.360 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:16.360 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.360 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.360 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:16.360 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.360 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.360 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.360 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:16.617 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:16.617 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:16.617 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:16.617 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.617 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.617 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:16.617 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.617 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.617 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.617 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.617 23:07:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.877 23:07:49 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:16.877 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:17.135 /dev/nbd0 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.135 
23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.135 1+0 records in 00:06:17.135 1+0 records out 00:06:17.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401542 s, 10.2 MB/s 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:17.135 /dev/nbd1 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.135 1+0 records in 00:06:17.135 1+0 records out 00:06:17.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440605 s, 9.3 MB/s 00:06:17.135 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:17.393 /dev/nbd10 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.393 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.393 1+0 records in 00:06:17.393 1+0 records out 00:06:17.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349709 s, 11.7 MB/s 00:06:17.394 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.394 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.394 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.394 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.394 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.394 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.394 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:17.394 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:17.655 /dev/nbd11 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.655 23:07:49 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.655 1+0 records in 00:06:17.655 1+0 records out 00:06:17.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000832174 s, 4.9 MB/s 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:17.655 23:07:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:17.915 /dev/nbd12 00:06:17.915 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:17.915 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:17.915 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:17.915 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.915 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.915 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.915 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:17.915 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.915 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.915 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.916 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.916 1+0 records in 00:06:17.916 1+0 records out 00:06:17.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102929 s, 4.0 MB/s 00:06:17.916 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.916 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.916 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.916 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.916 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.916 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.916 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:17.916 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:18.174 /dev/nbd13 
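
Every nbd_start_disk call in this trace is followed by the same readiness poll, so it is worth spelling the pattern out once. Below is a condensed reconstruction of the waitfornbd helper from common/autotest_common.sh as the xtrace shows it; the cap of 20 tries is taken from the trace, while the sleep interval and the /tmp scratch path are simplifications and not visible in the log.

waitfornbd() {
    local nbd_name=$1 i
    # Wait for the kernel to publish the device (the trace caps this at 20 tries).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                  # interval assumed; not visible in the trace
    done
    # A listed device can still fail I/O, so retry a single 4 KiB O_DIRECT read
    # until it goes through, then confirm it actually produced data.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
        sleep 0.1
    done
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]                               # a non-empty read back means the device is usable
}

Once all six devices pass this check, the suite pushes 1 MiB of /dev/urandom through each of them with dd oflag=direct and reads it back with cmp -b -n 1M, which is the write/verify traffic visible a little further down in this trace.
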
00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.175 1+0 records in 00:06:18.175 1+0 records out 00:06:18.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103496 s, 4.0 MB/s 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.175 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.434 { 00:06:18.434 "nbd_device": "/dev/nbd0", 00:06:18.434 "bdev_name": "Nvme0n1" 00:06:18.434 }, 00:06:18.434 { 00:06:18.434 "nbd_device": "/dev/nbd1", 00:06:18.434 "bdev_name": "Nvme1n1" 00:06:18.434 }, 00:06:18.434 { 00:06:18.434 "nbd_device": "/dev/nbd10", 00:06:18.434 "bdev_name": "Nvme2n1" 00:06:18.434 }, 00:06:18.434 { 00:06:18.434 "nbd_device": "/dev/nbd11", 00:06:18.434 "bdev_name": "Nvme2n2" 00:06:18.434 }, 00:06:18.434 { 00:06:18.434 "nbd_device": "/dev/nbd12", 00:06:18.434 "bdev_name": "Nvme2n3" 00:06:18.434 }, 00:06:18.434 { 00:06:18.434 "nbd_device": "/dev/nbd13", 00:06:18.434 "bdev_name": "Nvme3n1" 00:06:18.434 } 00:06:18.434 ]' 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.434 { 00:06:18.434 "nbd_device": "/dev/nbd0", 00:06:18.434 "bdev_name": "Nvme0n1" 00:06:18.434 }, 00:06:18.434 { 00:06:18.434 "nbd_device": "/dev/nbd1", 00:06:18.434 "bdev_name": "Nvme1n1" 00:06:18.434 }, 00:06:18.434 { 00:06:18.434 "nbd_device": "/dev/nbd10", 00:06:18.434 "bdev_name": "Nvme2n1" 
00:06:18.434 }, 00:06:18.434 { 00:06:18.434 "nbd_device": "/dev/nbd11", 00:06:18.434 "bdev_name": "Nvme2n2" 00:06:18.434 }, 00:06:18.434 { 00:06:18.434 "nbd_device": "/dev/nbd12", 00:06:18.434 "bdev_name": "Nvme2n3" 00:06:18.434 }, 00:06:18.434 { 00:06:18.434 "nbd_device": "/dev/nbd13", 00:06:18.434 "bdev_name": "Nvme3n1" 00:06:18.434 } 00:06:18.434 ]' 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.434 /dev/nbd1 00:06:18.434 /dev/nbd10 00:06:18.434 /dev/nbd11 00:06:18.434 /dev/nbd12 00:06:18.434 /dev/nbd13' 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.434 /dev/nbd1 00:06:18.434 /dev/nbd10 00:06:18.434 /dev/nbd11 00:06:18.434 /dev/nbd12 00:06:18.434 /dev/nbd13' 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:18.434 256+0 records in 00:06:18.434 256+0 records out 00:06:18.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596945 s, 176 MB/s 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.434 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.693 256+0 records in 00:06:18.693 256+0 records out 00:06:18.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0856704 s, 12.2 MB/s 00:06:18.693 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.693 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.693 256+0 records in 00:06:18.693 256+0 records out 00:06:18.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139676 s, 7.5 MB/s 00:06:18.693 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.693 23:07:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:18.952 256+0 records in 00:06:18.952 256+0 records out 00:06:18.952 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152462 s, 6.9 MB/s 00:06:18.952 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.952 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:18.952 256+0 records in 00:06:18.952 256+0 records out 00:06:18.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138236 s, 7.6 MB/s 00:06:18.952 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.952 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:19.211 256+0 records in 00:06:19.211 256+0 records out 00:06:19.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146372 s, 7.2 MB/s 00:06:19.211 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.211 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:19.471 256+0 records in 00:06:19.471 256+0 records out 00:06:19.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.228762 s, 4.6 MB/s 00:06:19.471 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:19.471 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.471 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.471 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.471 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.472 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.732 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.732 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.732 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.732 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.732 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.732 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.732 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.732 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.732 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.732 23:07:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.993 23:07:52 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.993 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:20.253 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:20.253 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:20.253 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:20.253 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.253 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.253 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:20.253 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.253 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.253 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.253 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:20.513 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:20.513 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:20.513 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:20.513 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.513 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.513 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:20.513 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.513 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.513 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.513 23:07:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:20.787 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:20.787 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:20.787 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:20.787 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.787 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.787 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:20.787 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.787 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.787 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.787 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.787 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:21.050 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:21.311 malloc_lvol_verify 00:06:21.311 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:21.572 55e1ff40-4a04-45fe-bc65-4950adf8acef 00:06:21.572 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:21.831 b7cf8e70-0181-4afb-9ecd-23eb049f6e3e 00:06:21.831 23:07:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:21.831 /dev/nbd0 00:06:21.831 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:21.831 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:21.831 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:21.831 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:21.831 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:21.831 mke2fs 1.47.0 (5-Feb-2023) 00:06:21.831 Discarding device blocks: 0/4096 done 00:06:21.831 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:21.831 00:06:21.831 Allocating group tables: 0/1 done 00:06:21.831 Writing inode tables: 0/1 done 00:06:21.831 Creating journal (1024 blocks): done 00:06:21.831 Writing superblocks and filesystem accounting information: 0/1 done 00:06:21.831 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:22.092 23:07:54 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59980 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59980 ']' 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59980 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59980 00:06:22.092 killing process with pid 59980 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59980' 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59980 00:06:22.092 23:07:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59980 00:06:23.037 23:07:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:23.037 00:06:23.037 real 0m10.511s 00:06:23.037 user 0m14.605s 00:06:23.037 sys 0m3.473s 00:06:23.037 23:07:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.037 ************************************ 00:06:23.037 END TEST bdev_nbd 00:06:23.037 ************************************ 00:06:23.037 23:07:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:23.037 23:07:55 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:23.037 23:07:55 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:23.037 skipping fio tests on NVMe due to multi-ns failures. 00:06:23.037 23:07:55 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
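
Just before bdev_nbd wraps up above, the suite repeats the nbd check on top of an lvol stack. Reduced to the bare RPC sequence it is easy to replay by hand; this sketch assumes an SPDK target is already serving /var/tmp/spdk-nbd.sock, and the sizes are the ones the trace passed.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

$rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512 B blocks
$rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
$rpc -s $sock bdev_lvol_create lvol 4 -l lvs                    # 4 MB volume inside the store
$rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0

# As in the trace, gate mkfs on the kernel reporting a non-zero capacity.
(( $(cat /sys/block/nbd0/size) > 0 )) && mkfs.ext4 /dev/nbd0
$rpc -s $sock nbd_stop_disk /dev/nbd0

The mkfs output in the log (journal creation, superblock write) is the actual pass signal here: if the lvol-backed nbd device were not fully functional, mkfs.ext4 would fail long before the suite reaches nbd_stop_disk.
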
00:06:23.037 23:07:55 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:23.037 23:07:55 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:23.037 23:07:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:23.037 23:07:55 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.037 23:07:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.037 ************************************ 00:06:23.037 START TEST bdev_verify 00:06:23.037 ************************************ 00:06:23.037 23:07:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:23.037 [2024-11-25 23:07:55.334995] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:06:23.037 [2024-11-25 23:07:55.335134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60359 ] 00:06:23.297 [2024-11-25 23:07:55.496406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.297 [2024-11-25 23:07:55.598847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.297 [2024-11-25 23:07:55.598848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.878 Running I/O for 5 seconds... 00:06:26.211 20608.00 IOPS, 80.50 MiB/s [2024-11-25T23:07:59.560Z] 21440.00 IOPS, 83.75 MiB/s [2024-11-25T23:08:00.501Z] 20928.00 IOPS, 81.75 MiB/s [2024-11-25T23:08:01.440Z] 20656.00 IOPS, 80.69 MiB/s [2024-11-25T23:08:01.440Z] 20556.80 IOPS, 80.30 MiB/s 00:06:29.071 Latency(us) 00:06:29.071 [2024-11-25T23:08:01.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:29.071 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.071 Verification LBA range: start 0x0 length 0xbd0bd 00:06:29.071 Nvme0n1 : 5.05 1674.04 6.54 0.00 0.00 76126.88 13409.67 99211.42 00:06:29.071 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.071 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:29.071 Nvme0n1 : 5.08 1713.21 6.69 0.00 0.00 74510.65 12703.90 88322.36 00:06:29.071 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.071 Verification LBA range: start 0x0 length 0xa0000 00:06:29.071 Nvme1n1 : 5.08 1676.24 6.55 0.00 0.00 75906.93 7410.61 91145.45 00:06:29.071 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.071 Verification LBA range: start 0xa0000 length 0xa0000 00:06:29.071 Nvme1n1 : 5.08 1712.72 6.69 0.00 0.00 74214.30 15526.99 68157.44 00:06:29.071 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.071 Verification LBA range: start 0x0 length 0x80000 00:06:29.071 Nvme2n1 : 5.08 1675.05 6.54 0.00 0.00 75740.72 9527.93 83482.78 00:06:29.071 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.071 Verification LBA range: start 0x80000 length 0x80000 00:06:29.071 Nvme2n1 : 5.09 1710.98 6.68 0.00 0.00 73971.03 18047.61 58881.58 00:06:29.071 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.071 Verification LBA range: start 0x0 length 0x80000 00:06:29.071 Nvme2n2 : 5.09 1683.35 6.58 0.00 0.00 75326.97 8469.27 66544.25 00:06:29.071 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.071 Verification LBA range: start 0x80000 length 0x80000 00:06:29.071 Nvme2n2 : 5.09 1710.51 6.68 0.00 0.00 73827.91 17241.01 60494.77 00:06:29.071 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.071 Verification LBA range: start 0x0 length 0x80000 00:06:29.071 Nvme2n3 : 5.10 1682.32 6.57 0.00 0.00 75186.86 10485.76 65334.35 00:06:29.071 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.071 Verification LBA range: start 0x80000 length 0x80000 00:06:29.071 Nvme2n3 : 5.09 1710.05 6.68 0.00 0.00 73704.71 12754.31 63721.16 00:06:29.071 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.071 Verification LBA range: start 0x0 length 0x20000 00:06:29.071 Nvme3n1 : 5.10 1681.36 6.57 0.00 0.00 75061.03 12250.19 65334.35 00:06:29.071 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.071 Verification LBA range: start 0x20000 length 0x20000 00:06:29.071 Nvme3n1 : 5.10 1720.15 6.72 0.00 0.00 73244.48 2545.82 64931.05 00:06:29.071 [2024-11-25T23:08:01.440Z] =================================================================================================================== 00:06:29.071 [2024-11-25T23:08:01.440Z] Total : 20349.98 79.49 0.00 0.00 74725.11 2545.82 99211.42 00:06:30.455 00:06:30.455 real 0m7.188s 00:06:30.455 user 0m13.418s 00:06:30.455 sys 0m0.239s 00:06:30.455 ************************************ 00:06:30.455 END TEST bdev_verify 00:06:30.455 ************************************ 00:06:30.455 23:08:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.455 23:08:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:30.455 23:08:02 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:30.455 23:08:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:30.455 23:08:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.455 23:08:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:30.455 ************************************ 00:06:30.455 START TEST bdev_verify_big_io 00:06:30.455 ************************************ 00:06:30.455 23:08:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:30.455 [2024-11-25 23:08:02.585462] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
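
bdev_verify and bdev_verify_big_io drive the same binary through the same harness; only the I/O size changes. The flag meanings below are annotations on the command line the trace shows, not part of the log itself (-C is forwarded exactly as the suite passes it and is left unannotated):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

# -q 128    : 128 outstanding I/Os per job
# -o 4096   : 4 KiB per I/O (the big-I/O pass swaps in -o 65536)
# -w verify : write, read back, and compare each block
# -t 5      : run every job for 5 seconds
# -m 0x3    : cores 0 and 1, which is why each namespace reports both a
#             Core Mask 0x1 job and a Core Mask 0x2 job in the tables
"$bdevperf" --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3

In the latency tables on either side of this point, a clean run is one where the Fail/s and TO/s columns stay at 0.00; the IOPS and MiB/s columns are informational and vary with host load.
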
00:06:30.455 [2024-11-25 23:08:02.585578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60457 ] 00:06:30.455 [2024-11-25 23:08:02.747551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.717 [2024-11-25 23:08:02.846844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.717 [2024-11-25 23:08:02.846922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.303 Running I/O for 5 seconds... 00:06:36.380 144.00 IOPS, 9.00 MiB/s [2024-11-25T23:08:09.689Z] 1865.50 IOPS, 116.59 MiB/s [2024-11-25T23:08:09.689Z] 2351.67 IOPS, 146.98 MiB/s 00:06:37.320 Latency(us) 00:06:37.320 [2024-11-25T23:08:09.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:37.320 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:37.320 Verification LBA range: start 0x0 length 0xbd0b 00:06:37.320 Nvme0n1 : 5.88 105.60 6.60 0.00 0.00 1159230.77 21878.94 1238932.87 00:06:37.320 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:37.320 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:37.320 Nvme0n1 : 5.88 104.37 6.52 0.00 0.00 1158786.66 17845.96 1245385.65 00:06:37.320 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:37.320 Verification LBA range: start 0x0 length 0xa000 00:06:37.320 Nvme1n1 : 5.88 104.89 6.56 0.00 0.00 1123454.21 92355.35 1038896.84 00:06:37.320 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:37.320 Verification LBA range: start 0xa000 length 0xa000 00:06:37.320 Nvme1n1 : 5.89 108.74 6.80 0.00 0.00 1093125.99 112116.97 1019538.51 00:06:37.320 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:37.320 Verification LBA range: start 0x0 length 0x8000 00:06:37.320 Nvme2n1 : 5.89 108.75 6.80 0.00 0.00 1059978.63 120182.94 1058255.16 00:06:37.320 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:37.320 Verification LBA range: start 0x8000 length 0x8000 00:06:37.320 Nvme2n1 : 5.89 108.65 6.79 0.00 0.00 1050675.67 128248.91 903388.55 00:06:37.320 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:37.321 Verification LBA range: start 0x0 length 0x8000 00:06:37.321 Nvme2n2 : 5.98 110.49 6.91 0.00 0.00 999409.28 93565.24 1058255.16 00:06:37.321 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:37.321 Verification LBA range: start 0x8000 length 0x8000 00:06:37.321 Nvme2n2 : 6.01 114.51 7.16 0.00 0.00 970055.61 29642.44 1451874.46 00:06:37.321 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:37.321 Verification LBA range: start 0x0 length 0x8000 00:06:37.321 Nvme2n3 : 6.11 121.65 7.60 0.00 0.00 881550.86 45169.43 1071160.71 00:06:37.321 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:37.321 Verification LBA range: start 0x8000 length 0x8000 00:06:37.321 Nvme2n3 : 6.06 113.20 7.07 0.00 0.00 943183.27 49605.71 2116510.33 00:06:37.321 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:37.321 Verification LBA range: start 0x0 length 0x2000 00:06:37.321 Nvme3n1 : 6.12 136.01 8.50 0.00 0.00 766778.05 1033.45 1109877.37 00:06:37.321 Job: Nvme3n1 (Core Mask 0x2, workload: verify, 
depth: 128, IO size: 65536) 00:06:37.321 Verification LBA range: start 0x2000 length 0x2000 00:06:37.321 Nvme3n1 : 6.13 133.00 8.31 0.00 0.00 777400.71 749.88 2155226.98 00:06:37.321 [2024-11-25T23:08:09.690Z] =================================================================================================================== 00:06:37.321 [2024-11-25T23:08:09.690Z] Total : 1369.86 85.62 0.00 0.00 985303.49 749.88 2155226.98 00:06:38.719 00:06:38.719 real 0m8.495s 00:06:38.719 user 0m16.074s 00:06:38.719 sys 0m0.218s 00:06:38.719 23:08:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.719 ************************************ 00:06:38.719 END TEST bdev_verify_big_io 00:06:38.719 ************************************ 00:06:38.719 23:08:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:38.719 23:08:11 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:38.719 23:08:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:38.719 23:08:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.719 23:08:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:38.719 ************************************ 00:06:38.719 START TEST bdev_write_zeroes 00:06:38.719 ************************************ 00:06:38.719 23:08:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:38.981 [2024-11-25 23:08:11.140739] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:06:38.981 [2024-11-25 23:08:11.140858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60566 ] 00:06:38.981 [2024-11-25 23:08:11.297577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.241 [2024-11-25 23:08:11.397558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.811 Running I/O for 1 seconds... 
00:06:40.778 56448.00 IOPS, 220.50 MiB/s 00:06:40.778 Latency(us) 00:06:40.778 [2024-11-25T23:08:13.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:40.778 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.778 Nvme0n1 : 1.02 9383.06 36.65 0.00 0.00 13615.52 5873.03 28835.84 00:06:40.778 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.778 Nvme1n1 : 1.02 9372.34 36.61 0.00 0.00 13613.75 8922.98 25306.98 00:06:40.778 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.778 Nvme2n1 : 1.03 9361.61 36.57 0.00 0.00 13544.79 8822.15 21677.29 00:06:40.778 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.778 Nvme2n2 : 1.03 9351.08 36.53 0.00 0.00 13530.46 8771.74 21374.82 00:06:40.778 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.778 Nvme2n3 : 1.03 9340.58 36.49 0.00 0.00 13519.13 8771.74 21778.12 00:06:40.778 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.778 Nvme3n1 : 1.03 9330.02 36.45 0.00 0.00 13511.96 8318.03 23592.96 00:06:40.778 [2024-11-25T23:08:13.148Z] =================================================================================================================== 00:06:40.779 [2024-11-25T23:08:13.148Z] Total : 56138.69 219.29 0.00 0.00 13555.93 5873.03 28835.84 00:06:41.723 00:06:41.723 real 0m2.662s 00:06:41.723 user 0m2.368s 00:06:41.723 sys 0m0.178s 00:06:41.723 ************************************ 00:06:41.723 END TEST bdev_write_zeroes 00:06:41.723 ************************************ 00:06:41.723 23:08:13 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.723 23:08:13 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:41.723 23:08:13 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:41.723 23:08:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:41.723 23:08:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.723 23:08:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.723 ************************************ 00:06:41.723 START TEST bdev_json_nonenclosed 00:06:41.723 ************************************ 00:06:41.723 23:08:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:41.723 [2024-11-25 23:08:13.868674] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
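
bdev_json_nonenclosed, starting above, is a negative test: bdevperf is handed a config whose top level is not wrapped in braces and must refuse to start. The log never prints nonenclosed.json itself, so the payload sketched in the comment below is an assumption; the command line is the one just traced.

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# nonenclosed.json presumably holds something like
#   "subsystems": []
# i.e. valid JSON fragments with no enclosing {} -- the file's actual
# contents are not shown in the log.
if ! "$bdevperf" --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json \
        -q 128 -o 4096 -w write_zeroes -t 1 ''; then
    echo 'rejected as expected: config not enclosed in {}'
fi
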
00:06:41.724 [2024-11-25 23:08:13.868793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60621 ] 00:06:41.724 [2024-11-25 23:08:14.028242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.994 [2024-11-25 23:08:14.124991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.994 [2024-11-25 23:08:14.125077] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:41.994 [2024-11-25 23:08:14.125095] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:41.994 [2024-11-25 23:08:14.125105] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:41.994 00:06:41.994 real 0m0.493s 00:06:41.994 user 0m0.293s 00:06:41.994 sys 0m0.096s 00:06:41.994 23:08:14 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.994 23:08:14 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:41.994 ************************************ 00:06:41.994 END TEST bdev_json_nonenclosed 00:06:41.994 ************************************ 00:06:41.994 23:08:14 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:41.994 23:08:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:41.994 23:08:14 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.994 23:08:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.994 ************************************ 00:06:41.994 START TEST bdev_json_nonarray 00:06:41.994 ************************************ 00:06:41.994 23:08:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:42.255 [2024-11-25 23:08:14.419776] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:06:42.255 [2024-11-25 23:08:14.419890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60641 ] 00:06:42.255 [2024-11-25 23:08:14.579981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.517 [2024-11-25 23:08:14.678706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.517 [2024-11-25 23:08:14.678793] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
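
The companion bdev_json_nonarray run that just errored out above probes the same parser from the other direction: the braces are present but "subsystems" is not an array. Again the file itself is not shown in the log, so the payload in the comment is a guess consistent with the error message printed above.

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# nonarray.json plausibly looks like {"subsystems": {}} -- an object where
# an array is required; contents assumed, not shown in the log.
if ! "$bdevperf" --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json \
        -q 128 -o 4096 -w write_zeroes -t 1 ''; then
    echo "rejected as expected: 'subsystems' should be an array"
fi
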
00:06:42.517 [2024-11-25 23:08:14.678810] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:42.517 [2024-11-25 23:08:14.678819] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.517 00:06:42.517 real 0m0.500s 00:06:42.517 user 0m0.305s 00:06:42.517 sys 0m0.091s 00:06:42.517 ************************************ 00:06:42.517 END TEST bdev_json_nonarray 00:06:42.517 ************************************ 00:06:42.517 23:08:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.517 23:08:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:42.779 23:08:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:42.779 23:08:14 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:42.779 23:08:14 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:42.779 23:08:14 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:42.779 23:08:14 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:42.779 23:08:14 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:42.779 23:08:14 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:42.779 23:08:14 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:42.779 23:08:14 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:42.779 23:08:14 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:42.779 23:08:14 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:42.779 00:06:42.779 real 0m37.278s 00:06:42.779 user 0m57.238s 00:06:42.779 sys 0m5.514s 00:06:42.779 23:08:14 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.779 23:08:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.779 ************************************ 00:06:42.779 END TEST blockdev_nvme 00:06:42.779 ************************************ 00:06:42.779 23:08:14 -- spdk/autotest.sh@209 -- # uname -s 00:06:42.779 23:08:14 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:42.779 23:08:14 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:42.779 23:08:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:42.779 23:08:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.779 23:08:14 -- common/autotest_common.sh@10 -- # set +x 00:06:42.779 ************************************ 00:06:42.779 START TEST blockdev_nvme_gpt 00:06:42.779 ************************************ 00:06:42.779 23:08:14 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:42.779 * Looking for test storage... 
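The stretch of trace below is scripts/common.sh gating coverage options on the installed lcov version: lt 1.15 2 asks whether 1.15 sorts before 2 by splitting both version strings on '.', '-' and ':' (IFS=.-:) and comparing numerically field by field, and the answer decides which --rc flags end up exported in LCOV_OPTS. A standalone sketch of that comparison idiom (my own function name, not the script's; the real lt()/cmp_versions pair in common.sh is what actually runs):

    # True (exit 0) when version $1 is strictly older than $2.
    version_lt() {
      local IFS=.-: i
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1  # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov: add branch/function coverage flags"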
00:06:42.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:42.779 23:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.779 23:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.779 23:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.779 23:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.779 23:08:15 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:42.779 23:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.779 23:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.779 --rc genhtml_branch_coverage=1 00:06:42.779 --rc genhtml_function_coverage=1 00:06:42.779 --rc genhtml_legend=1 00:06:42.779 --rc geninfo_all_blocks=1 00:06:42.779 --rc geninfo_unexecuted_blocks=1 00:06:42.779 00:06:42.779 ' 00:06:42.779 23:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.779 --rc 
genhtml_branch_coverage=1 00:06:42.779 --rc genhtml_function_coverage=1 00:06:42.779 --rc genhtml_legend=1 00:06:42.779 --rc geninfo_all_blocks=1 00:06:42.779 --rc geninfo_unexecuted_blocks=1 00:06:42.779 00:06:42.779 ' 00:06:42.779 23:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.779 --rc genhtml_branch_coverage=1 00:06:42.779 --rc genhtml_function_coverage=1 00:06:42.779 --rc genhtml_legend=1 00:06:42.779 --rc geninfo_all_blocks=1 00:06:42.779 --rc geninfo_unexecuted_blocks=1 00:06:42.779 00:06:42.779 ' 00:06:42.779 23:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.779 --rc genhtml_branch_coverage=1 00:06:42.779 --rc genhtml_function_coverage=1 00:06:42.779 --rc genhtml_legend=1 00:06:42.779 --rc geninfo_all_blocks=1 00:06:42.779 --rc geninfo_unexecuted_blocks=1 00:06:42.779 00:06:42.779 ' 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:42.779 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:42.780 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:42.780 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:42.780 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:42.780 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:42.780 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:42.780 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:42.780 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:42.780 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:42.780 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60725 00:06:42.780 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:42.780 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60725 
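blockdev.sh now launches spdk_tgt in the background (trace below) and parks in waitforlisten until pid 60725 is serving RPCs on /var/tmp/spdk.sock; max_retries=100 bounds the wait, hence the "Waiting for process to start up..." echo. The real helper in autotest_common.sh handles more corner cases; a minimal readiness loop in the same spirit, assuming the default socket path, would be:

    # Poll until the target answers RPCs on its UNIX socket, or give up.
    wait_for_rpc() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1                        # target died early
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
      done
      return 1
    }
    wait_for_rpc "$spdk_tgt_pid" && echo "target is up"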
00:06:42.780 23:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:42.780 23:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60725 ']' 00:06:42.780 23:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.780 23:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.780 23:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.780 23:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.780 23:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:43.041 [2024-11-25 23:08:15.245483] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:06:43.041 [2024-11-25 23:08:15.245670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60725 ] 00:06:43.302 [2024-11-25 23:08:15.414702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.302 [2024-11-25 23:08:15.515512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.888 23:08:16 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.888 23:08:16 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:43.888 23:08:16 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:43.888 23:08:16 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:43.888 23:08:16 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:44.149 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:44.409 Waiting for block devices as requested 00:06:44.409 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:44.409 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:44.409 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:44.668 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:49.955 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:49.955 23:08:21 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:49.955 23:08:21 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:49.955 23:08:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:49.955 23:08:21 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:49.955 23:08:21 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:49.955 23:08:21 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:49.955 23:08:21 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:49.955 23:08:21 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:49.955 23:08:21 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:49.956 23:08:21 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:49.956 23:08:21 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:49.956 BYT; 00:06:49.956 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:49.956 23:08:21 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:49.956 BYT; 00:06:49.956 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:49.956 23:08:21 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:49.956 23:08:21 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:49.956 23:08:21 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:49.956 23:08:21 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:49.956 23:08:21 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:49.956 23:08:21 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:49.956 23:08:22 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:49.956 23:08:22 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:49.956 23:08:22 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:49.956 23:08:22 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:49.956 23:08:22 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:49.956 23:08:22 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:50.908 The operation has completed successfully. 00:06:50.909 23:08:23 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:51.939 The operation has completed successfully. 00:06:51.939 23:08:24 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:52.510 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:52.771 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:52.771 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.031 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.031 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.031 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:53.031 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.031 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.031 [] 00:06:53.031 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.031 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:53.031 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:53.031 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:53.031 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:53.031 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:53.031 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.031 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.290 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.290 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:53.290 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.290 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.290 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.290 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:53.290 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:53.290 23:08:25 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.290 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.290 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.290 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:53.290 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.290 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.290 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.290 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:53.290 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.290 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.290 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.290 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:53.290 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:53.290 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:53.290 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.290 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.552 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.552 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:53.552 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:53.552 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8338fa51-10a5-4131-baf9-eec3c213b8a4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8338fa51-10a5-4131-baf9-eec3c213b8a4",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "4bab1da1-1f7c-498f-a8e9-b341ba89a39a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4bab1da1-1f7c-498f-a8e9-b341ba89a39a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "80696871-cf40-454f-bf7e-c25df7e563d9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "80696871-cf40-454f-bf7e-c25df7e563d9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c38308cf-84ca-4157-9f29-e4341cd65e89"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c38308cf-84ca-4157-9f29-e4341cd65e89",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5dbc11e2-b63b-4827-9b0a-e7edd87594ce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5dbc11e2-b63b-4827-9b0a-e7edd87594ce",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:53.552 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:53.552 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:53.552 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:53.552 23:08:25 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60725 00:06:53.552 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60725 ']' 00:06:53.552 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60725 00:06:53.552 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:53.552 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.552 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60725 00:06:53.552 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.552 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.552 killing process with pid 60725 00:06:53.552 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60725' 00:06:53.552 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60725 00:06:53.553 23:08:25 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60725 00:06:54.929 23:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:54.929 23:08:26 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:54.929 23:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:54.929 23:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.929 23:08:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.929 ************************************ 00:06:54.929 START TEST bdev_hello_world 00:06:54.929 ************************************ 00:06:54.929 23:08:26 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:54.929 
[2024-11-25 23:08:27.031920] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:06:54.929 [2024-11-25 23:08:27.032033] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61350 ] 00:06:54.929 [2024-11-25 23:08:27.193203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.205 [2024-11-25 23:08:27.294852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.784 [2024-11-25 23:08:27.847579] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:55.784 [2024-11-25 23:08:27.847624] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:55.784 [2024-11-25 23:08:27.847646] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:55.784 [2024-11-25 23:08:27.850248] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:55.784 [2024-11-25 23:08:27.851399] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:55.784 [2024-11-25 23:08:27.851429] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:55.784 [2024-11-25 23:08:27.852335] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:55.784 00:06:55.784 [2024-11-25 23:08:27.852360] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:56.354 00:06:56.354 real 0m1.607s 00:06:56.354 user 0m1.319s 00:06:56.354 sys 0m0.179s 00:06:56.354 23:08:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.354 ************************************ 00:06:56.354 END TEST bdev_hello_world 00:06:56.354 ************************************ 00:06:56.354 23:08:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:56.354 23:08:28 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:56.354 23:08:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:56.354 23:08:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.354 23:08:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.354 ************************************ 00:06:56.354 START TEST bdev_bounds 00:06:56.354 ************************************ 00:06:56.354 23:08:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:56.354 23:08:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61386 00:06:56.354 23:08:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.354 Process bdevio pid: 61386 00:06:56.355 23:08:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61386' 00:06:56.355 23:08:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61386 00:06:56.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
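bdev_bounds runs bdevio, which attaches every bdev from bdev.json and probes boundary conditions (I/O ending exactly at the device size, I/O running past it, oversized transfers, overlapping offsets, plus NVMe passthru), as the suites below show. The harness pattern in the trace is a two-process one: bdevio is started with -w so it waits for work, and -s 0 comes from the PRE_RESERVED_MEM=0 set earlier; reduced to its essentials, with paths relative to the SPDK repo:

    # Start bdevio as an RPC server, then drive the suites from a second process.
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    # ... waitforlisten on /var/tmp/spdk.sock ...
    test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"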
00:06:56.355 23:08:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61386 ']' 00:06:56.355 23:08:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.355 23:08:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.355 23:08:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.355 23:08:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.355 23:08:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:56.355 23:08:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:56.355 [2024-11-25 23:08:28.707860] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:06:56.355 [2024-11-25 23:08:28.707977] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61386 ] 00:06:56.617 [2024-11-25 23:08:28.869069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.617 [2024-11-25 23:08:28.969906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.617 [2024-11-25 23:08:28.970444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.617 [2024-11-25 23:08:28.970462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.188 23:08:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.188 23:08:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:57.188 23:08:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:57.450 I/O targets: 00:06:57.450 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:57.450 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:57.450 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:57.450 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:57.450 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:57.450 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:57.450 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:57.450 00:06:57.450 00:06:57.450 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.450 http://cunit.sourceforge.net/ 00:06:57.450 00:06:57.450 00:06:57.450 Suite: bdevio tests on: Nvme3n1 00:06:57.450 Test: blockdev write read block ...passed 00:06:57.450 Test: blockdev write zeroes read block ...passed 00:06:57.450 Test: blockdev write zeroes read no split ...passed 00:06:57.450 Test: blockdev write zeroes read split ...passed 00:06:57.450 Test: blockdev write zeroes read split partial ...passed 00:06:57.450 Test: blockdev reset ...[2024-11-25 23:08:29.671586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:57.450 [2024-11-25 23:08:29.674756] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
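Several of the notices in the suites below are deliberate failures, not regressions: the comparev-and-writev tests provoke a miscompare, reported as COMPARE FAILURE (02/85), i.e. status code type 2h (media and data integrity errors) with status code 85h; the vendor-specific passthru test sends an opcode the QEMU controller does not implement and expects INVALID OPCODE (00/01), generic status 01h. Reproducing the latter by hand would look roughly like this with nvme-cli (the opcode value is illustrative, in the vendor-specific admin range, not necessarily the one bdevio picks):

    # Expect "Invalid Command Opcode" back from the controller.
    nvme admin-passthru /dev/nvme3 --opcode=0xc0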
00:06:57.450 passed 00:06:57.450 Test: blockdev write read 8 blocks ...passed 00:06:57.450 Test: blockdev write read size > 128k ...passed 00:06:57.450 Test: blockdev write read invalid size ...passed 00:06:57.450 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.450 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.450 Test: blockdev write read max offset ...passed 00:06:57.450 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.450 Test: blockdev writev readv 8 blocks ...passed 00:06:57.450 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.450 Test: blockdev writev readv block ...passed 00:06:57.450 Test: blockdev writev readv size > 128k ...passed 00:06:57.450 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.450 Test: blockdev comparev and writev ...[2024-11-25 23:08:29.686018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af004000 len:0x1000 00:06:57.450 [2024-11-25 23:08:29.686152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:57.450 passed 00:06:57.450 Test: blockdev nvme passthru rw ...passed 00:06:57.450 Test: blockdev nvme passthru vendor specific ...[2024-11-25 23:08:29.687626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:57.450 passed 00:06:57.450 Test: blockdev nvme admin passthru ...[2024-11-25 23:08:29.687719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:57.451 passed 00:06:57.451 Test: blockdev copy ...passed 00:06:57.451 Suite: bdevio tests on: Nvme2n3 00:06:57.451 Test: blockdev write read block ...passed 00:06:57.451 Test: blockdev write zeroes read block ...passed 00:06:57.451 Test: blockdev write zeroes read no split ...passed 00:06:57.451 Test: blockdev write zeroes read split ...passed 00:06:57.451 Test: blockdev write zeroes read split partial ...passed 00:06:57.451 Test: blockdev reset ...[2024-11-25 23:08:29.741592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:57.451 [2024-11-25 23:08:29.746865] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
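The COMPARE print above also shows the data pointer: "SGL DATA BLOCK ADDRESS ... len:0x1000" describes a buffer covering exactly one block of the 4096-byte-formatted namespace, matching the command's lba:0 len:1:

    printf '%d\n' 0x1000   # 4096 bytes = block_size 4096 * 1 block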
00:06:57.451 passed 00:06:57.451 Test: blockdev write read 8 blocks ...passed 00:06:57.451 Test: blockdev write read size > 128k ...passed 00:06:57.451 Test: blockdev write read invalid size ...passed 00:06:57.451 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.451 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.451 Test: blockdev write read max offset ...passed 00:06:57.451 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.451 Test: blockdev writev readv 8 blocks ...passed 00:06:57.451 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.451 Test: blockdev writev readv block ...passed 00:06:57.451 Test: blockdev writev readv size > 128k ...passed 00:06:57.451 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.451 Test: blockdev comparev and writev ...[2024-11-25 23:08:29.764815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af002000 len:0x1000 00:06:57.451 [2024-11-25 23:08:29.764922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:57.451 passed 00:06:57.451 Test: blockdev nvme passthru rw ...passed 00:06:57.451 Test: blockdev nvme passthru vendor specific ...[2024-11-25 23:08:29.767422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:57.451 [2024-11-25 23:08:29.767502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:57.451 passed 00:06:57.451 Test: blockdev nvme admin passthru ...passed 00:06:57.451 Test: blockdev copy ...passed 00:06:57.451 Suite: bdevio tests on: Nvme2n2 00:06:57.451 Test: blockdev write read block ...passed 00:06:57.451 Test: blockdev write zeroes read block ...passed 00:06:57.451 Test: blockdev write zeroes read no split ...passed 00:06:57.451 Test: blockdev write zeroes read split ...passed 00:06:57.713 Test: blockdev write zeroes read split partial ...passed 00:06:57.713 Test: blockdev reset ...[2024-11-25 23:08:29.823597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:57.713 [2024-11-25 23:08:29.827251] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:57.713 passed 00:06:57.713 Test: blockdev write read 8 blocks ...passed 00:06:57.713 Test: blockdev write read size > 128k ...passed 00:06:57.713 Test: blockdev write read invalid size ...passed 00:06:57.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.713 Test: blockdev write read max offset ...passed 00:06:57.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.713 Test: blockdev writev readv 8 blocks ...passed 00:06:57.713 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.713 Test: blockdev writev readv block ...passed 00:06:57.713 Test: blockdev writev readv size > 128k ...passed 00:06:57.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.713 Test: blockdev comparev and writev ...[2024-11-25 23:08:29.842795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4438000 len:0x1000 00:06:57.713 [2024-11-25 23:08:29.842896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:57.713 passed 00:06:57.713 Test: blockdev nvme passthru rw ...passed 00:06:57.713 Test: blockdev nvme passthru vendor specific ...[2024-11-25 23:08:29.845400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:57.713 [2024-11-25 23:08:29.845426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:57.713 passed 00:06:57.713 Test: blockdev nvme admin passthru ...passed 00:06:57.713 Test: blockdev copy ...passed 00:06:57.713 Suite: bdevio tests on: Nvme2n1 00:06:57.713 Test: blockdev write read block ...passed 00:06:57.713 Test: blockdev write zeroes read block ...passed 00:06:57.713 Test: blockdev write zeroes read no split ...passed 00:06:57.713 Test: blockdev write zeroes read split ...passed 00:06:57.713 Test: blockdev write zeroes read split partial ...passed 00:06:57.713 Test: blockdev reset ...[2024-11-25 23:08:29.905672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:57.713 passed 00:06:57.713 Test: blockdev write read 8 blocks ...[2024-11-25 23:08:29.909228] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:57.713 passed 00:06:57.713 Test: blockdev write read size > 128k ...passed 00:06:57.713 Test: blockdev write read invalid size ...passed 00:06:57.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.713 Test: blockdev write read max offset ...passed 00:06:57.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.713 Test: blockdev writev readv 8 blocks ...passed 00:06:57.713 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.713 Test: blockdev writev readv block ...passed 00:06:57.713 Test: blockdev writev readv size > 128k ...passed 00:06:57.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.713 Test: blockdev comparev and writev ...[2024-11-25 23:08:29.925651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4434000 len:0x1000 00:06:57.713 [2024-11-25 23:08:29.925689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:57.713 passed 00:06:57.713 Test: blockdev nvme passthru rw ...passed 00:06:57.713 Test: blockdev nvme passthru vendor specific ...[2024-11-25 23:08:29.928251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:57.713 [2024-11-25 23:08:29.928278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:57.713 passed 00:06:57.713 Test: blockdev nvme admin passthru ...passed 00:06:57.713 Test: blockdev copy ...passed 00:06:57.713 Suite: bdevio tests on: Nvme1n1p2 00:06:57.713 Test: blockdev write read block ...passed 00:06:57.713 Test: blockdev write zeroes read block ...passed 00:06:57.713 Test: blockdev write zeroes read no split ...passed 00:06:57.713 Test: blockdev write zeroes read split ...passed 00:06:57.713 Test: blockdev write zeroes read split partial ...passed 00:06:57.713 Test: blockdev reset ...[2024-11-25 23:08:29.987666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:57.713 [2024-11-25 23:08:29.991296] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
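A detail worth noticing in the two partition suites below: the GPT bdevs translate bdev-relative offsets into namespace LBAs. The compare that bdevio issues at partition offset 0 shows up on the wire at lba:655360 for Nvme1n1p2 and lba:256 for Nvme1n1p1, matching the offset_blocks values in the earlier bdev_get_bdevs dump:

    # nvme_lba = partition offset_blocks + bdev-relative lba
    echo $((655360 + 0))   # Nvme1n1p2 -> 655360
    echo $((256 + 0))      # Nvme1n1p1 -> 256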
00:06:57.713 passed 00:06:57.713 Test: blockdev write read 8 blocks ...passed 00:06:57.713 Test: blockdev write read size > 128k ...passed 00:06:57.713 Test: blockdev write read invalid size ...passed 00:06:57.714 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.714 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.714 Test: blockdev write read max offset ...passed 00:06:57.714 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.714 Test: blockdev writev readv 8 blocks ...passed 00:06:57.714 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.714 Test: blockdev writev readv block ...passed 00:06:57.714 Test: blockdev writev readv size > 128k ...passed 00:06:57.714 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.714 Test: blockdev comparev and writev ...[2024-11-25 23:08:30.009130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d4430000 len:0x1000 00:06:57.714 [2024-11-25 23:08:30.009162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:57.714 passed 00:06:57.714 Test: blockdev nvme passthru rw ...passed 00:06:57.714 Test: blockdev nvme passthru vendor specific ...passed 00:06:57.714 Test: blockdev nvme admin passthru ...passed 00:06:57.714 Test: blockdev copy ...passed 00:06:57.714 Suite: bdevio tests on: Nvme1n1p1 00:06:57.714 Test: blockdev write read block ...passed 00:06:57.714 Test: blockdev write zeroes read block ...passed 00:06:57.714 Test: blockdev write zeroes read no split ...passed 00:06:57.714 Test: blockdev write zeroes read split ...passed 00:06:57.714 Test: blockdev write zeroes read split partial ...passed 00:06:57.714 Test: blockdev reset ...[2024-11-25 23:08:30.061454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:57.714 passed 00:06:57.714 Test: blockdev write read 8 blocks ...[2024-11-25 23:08:30.065094] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:57.714 passed 00:06:57.714 Test: blockdev write read size > 128k ...passed 00:06:57.714 Test: blockdev write read invalid size ...passed 00:06:57.714 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.714 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.714 Test: blockdev write read max offset ...passed 00:06:57.714 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.714 Test: blockdev writev readv 8 blocks ...passed 00:06:57.714 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.714 Test: blockdev writev readv block ...passed 00:06:57.975 Test: blockdev writev readv size > 128k ...passed 00:06:57.975 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.975 Test: blockdev comparev and writev ...[2024-11-25 23:08:30.081931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2afa0e000 len:0x1000 00:06:57.975 [2024-11-25 23:08:30.081961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:57.975 passed 00:06:57.975 Test: blockdev nvme passthru rw ...passed 00:06:57.975 Test: blockdev nvme passthru vendor specific ...passed 00:06:57.975 Test: blockdev nvme admin passthru ...passed 00:06:57.975 Test: blockdev copy ...passed 00:06:57.975 Suite: bdevio tests on: Nvme0n1 00:06:57.975 Test: blockdev write read block ...passed 00:06:57.975 Test: blockdev write zeroes read block ...passed 00:06:57.975 Test: blockdev write zeroes read no split ...passed 00:06:57.975 Test: blockdev write zeroes read split ...passed 00:06:57.975 Test: blockdev write zeroes read split partial ...passed 00:06:57.975 Test: blockdev reset ...[2024-11-25 23:08:30.134031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:57.975 passed 00:06:57.975 Test: blockdev write read 8 blocks ...[2024-11-25 23:08:30.137480] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:57.975 passed 00:06:57.975 Test: blockdev write read size > 128k ...passed 00:06:57.975 Test: blockdev write read invalid size ...passed 00:06:57.975 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.975 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.975 Test: blockdev write read max offset ...passed 00:06:57.975 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.975 Test: blockdev writev readv 8 blocks ...passed 00:06:57.975 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.975 Test: blockdev writev readv block ...passed 00:06:57.975 Test: blockdev writev readv size > 128k ...passed 00:06:57.975 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.975 Test: blockdev comparev and writev ...passed 00:06:57.975 Test: blockdev nvme passthru rw ...[2024-11-25 23:08:30.151644] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:57.975 separate metadata which is not supported yet. 
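# Annotation: bdevio skips comparev_and_writev on Nvme0n1 (message above)
# because that namespace carries separate, non-interleaved metadata, which the
# test does not support yet. A hedged way to inspect a bdev's metadata layout
# against a running SPDK app that owns the bdev — the jq field names are
# assumptions about bdev_get_bdevs output; verify against your build:
  ./scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {block_size, md_size, md_interleave, dif_type}'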
00:06:57.975 passed 00:06:57.975 Test: blockdev nvme passthru vendor specific ...[2024-11-25 23:08:30.152663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:57.975 [2024-11-25 23:08:30.152781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:57.975 passed 00:06:57.975 Test: blockdev nvme admin passthru ...passed 00:06:57.975 Test: blockdev copy ...passed 00:06:57.975 00:06:57.975 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.975 suites 7 7 n/a 0 0 00:06:57.976 tests 161 161 161 0 0 00:06:57.976 asserts 1025 1025 1025 0 n/a 00:06:57.976 00:06:57.976 Elapsed time = 1.348 seconds 00:06:57.976 0 00:06:57.976 23:08:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61386 00:06:57.976 23:08:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61386 ']' 00:06:57.976 23:08:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61386 00:06:57.976 23:08:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:57.976 23:08:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.976 23:08:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61386 00:06:57.976 killing process with pid 61386 00:06:57.976 23:08:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.976 23:08:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.976 23:08:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61386' 00:06:57.976 23:08:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61386 00:06:57.976 23:08:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61386 00:06:58.548 23:08:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:58.548 00:06:58.548 real 0m2.230s 00:06:58.548 user 0m5.644s 00:06:58.548 sys 0m0.256s 00:06:58.548 23:08:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.548 ************************************ 00:06:58.548 END TEST bdev_bounds 00:06:58.548 ************************************ 00:06:58.548 23:08:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:58.810 23:08:30 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:58.810 23:08:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:58.810 23:08:30 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.810 23:08:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.810 ************************************ 00:06:58.810 START TEST bdev_nbd 00:06:58.810 ************************************ 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61447 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61447 /var/tmp/spdk-nbd.sock 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61447 ']' 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.810 23:08:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:58.810 [2024-11-25 23:08:30.996708] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
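# Annotation: bdev_nbd hosts the block devices in a standalone bdev_svc app
# bound to its own RPC socket (see the waitforlisten invocation above). A
# minimal manual equivalent from an SPDK checkout — the readiness call is an
# assumption; the test itself polls the socket via waitforlisten:
  ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json ./test/bdev/bdev.json &
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock framework_wait_init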
00:06:58.811 [2024-11-25 23:08:30.996821] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.811 [2024-11-25 23:08:31.151168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.071 [2024-11-25 23:08:31.249335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.639 23:08:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.639 23:08:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:59.639 23:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:59.640 23:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.640 23:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:59.640 23:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:59.640 23:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:59.640 23:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.640 23:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:59.640 23:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:59.640 23:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:59.640 23:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:59.640 23:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:59.640 23:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:59.640 23:08:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.901 1+0 records in 00:06:59.901 1+0 records out 00:06:59.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546441 s, 7.5 MB/s 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:59.901 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.163 1+0 records in 00:07:00.163 1+0 records out 00:07:00.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000976991 s, 4.2 MB/s 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.163 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:00.164 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:00.164 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.164 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.164 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.164 1+0 records in 00:07:00.164 1+0 records out 00:07:00.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00097857 s, 4.2 MB/s 00:07:00.164 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.164 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:00.164 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.426 1+0 records in 00:07:00.426 1+0 records out 00:07:00.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720856 s, 5.7 MB/s 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:00.426 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:00.686 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:00.686 23:08:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.686 1+0 records in 00:07:00.686 1+0 records out 00:07:00.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134339 s, 3.0 MB/s 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:00.686 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
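# Annotation: the pattern traced above repeats for every bdev — ask the app to
# export it over NBD (with no index given, the RPC picks a free /dev/nbdX and
# prints it, captured into nbd_device below), then waitfornbd proves the node
# is usable with one 4 KiB O_DIRECT read:
  nbd_device=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3)
  grep -q -w "${nbd_device#/dev/}" /proc/partitions
  dd if=$nbd_device of=/tmp/nbdtest bs=4096 count=1 iflag=direct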
00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.947 1+0 records in 00:07:00.947 1+0 records out 00:07:00.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000931782 s, 4.4 MB/s 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:00.947 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.208 1+0 records in 00:07:01.208 1+0 records out 00:07:01.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000750735 s, 5.5 MB/s 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:01.208 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.469 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:01.469 { 00:07:01.469 "nbd_device": "/dev/nbd0", 00:07:01.469 "bdev_name": "Nvme0n1" 00:07:01.469 }, 00:07:01.469 { 00:07:01.469 "nbd_device": "/dev/nbd1", 00:07:01.469 "bdev_name": "Nvme1n1p1" 00:07:01.469 }, 00:07:01.469 { 00:07:01.469 "nbd_device": "/dev/nbd2", 00:07:01.469 "bdev_name": "Nvme1n1p2" 00:07:01.469 }, 00:07:01.469 { 00:07:01.469 "nbd_device": "/dev/nbd3", 00:07:01.469 "bdev_name": "Nvme2n1" 00:07:01.469 }, 00:07:01.469 { 00:07:01.469 "nbd_device": "/dev/nbd4", 00:07:01.469 "bdev_name": "Nvme2n2" 00:07:01.469 }, 00:07:01.469 { 00:07:01.469 "nbd_device": "/dev/nbd5", 00:07:01.469 "bdev_name": "Nvme2n3" 00:07:01.469 }, 00:07:01.469 { 00:07:01.469 "nbd_device": "/dev/nbd6", 00:07:01.469 "bdev_name": "Nvme3n1" 00:07:01.469 } 00:07:01.469 ]' 00:07:01.469 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:01.469 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:01.469 { 00:07:01.469 "nbd_device": "/dev/nbd0", 00:07:01.469 "bdev_name": "Nvme0n1" 00:07:01.469 }, 00:07:01.469 { 00:07:01.469 "nbd_device": "/dev/nbd1", 00:07:01.469 "bdev_name": "Nvme1n1p1" 00:07:01.469 }, 00:07:01.469 { 00:07:01.469 "nbd_device": "/dev/nbd2", 00:07:01.469 "bdev_name": "Nvme1n1p2" 00:07:01.469 }, 00:07:01.469 { 00:07:01.469 "nbd_device": "/dev/nbd3", 00:07:01.469 "bdev_name": "Nvme2n1" 00:07:01.469 }, 00:07:01.469 { 00:07:01.469 "nbd_device": "/dev/nbd4", 00:07:01.469 "bdev_name": "Nvme2n2" 00:07:01.469 }, 00:07:01.469 { 00:07:01.469 "nbd_device": "/dev/nbd5", 00:07:01.469 "bdev_name": "Nvme2n3" 00:07:01.469 }, 00:07:01.469 { 00:07:01.469 "nbd_device": "/dev/nbd6", 00:07:01.469 "bdev_name": "Nvme3n1" 00:07:01.469 } 00:07:01.469 ]' 00:07:01.469 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:01.469 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:01.469 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.469 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:01.469 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.469 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:01.469 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.469 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:01.731 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.731 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.731 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.731 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.731 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.731 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.731 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.731 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.731 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.731 23:08:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.041 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.041 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.041 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.041 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.041 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.041 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.041 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.041 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.041 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.041 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:02.041 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:02.041 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:02.041 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.338 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:02.599 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:02.599 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:02.599 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:02.599 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.599 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.599 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:02.599 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.599 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.599 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.599 23:08:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:02.861 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:02.861 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:02.861 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:02.861 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.861 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.861 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:02.861 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.861 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.861 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.861 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:03.121 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:03.121 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:03.121 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
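# Annotation: teardown mirrors setup — nbd_stop_disk detaches the device and
# waitfornbd_exit polls /proc/partitions until the name disappears. Sketched
# below; the helper's exact retry pacing may differ from this loop:
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
  while grep -q -w nbd6 /proc/partitions; do sleep 0.1; done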
00:07:03.121 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.121 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.121 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:03.121 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.121 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.121 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.121 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.121 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.383 23:08:35 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:03.383 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:03.652 /dev/nbd0 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.652 1+0 records in 00:07:03.652 1+0 records out 00:07:03.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012392 s, 3.3 MB/s 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:03.652 23:08:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:03.915 /dev/nbd1 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.915 23:08:36 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.915 1+0 records in 00:07:03.915 1+0 records out 00:07:03.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000874639 s, 4.7 MB/s 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:03.915 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:03.915 /dev/nbd10 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.177 1+0 records in 00:07:04.177 1+0 records out 00:07:04.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00148208 s, 2.8 MB/s 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:04.177 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:04.177 /dev/nbd11 00:07:04.439 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:04.439 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:04.439 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:04.439 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.439 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.439 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.439 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:04.439 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.439 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.439 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.439 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.439 1+0 records in 00:07:04.439 1+0 records out 00:07:04.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103096 s, 4.0 MB/s 00:07:04.439 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.439 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.439 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:04.440 /dev/nbd12 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
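# Annotation: waitfornbd, reconstructed from this xtrace as a sketch — retry
# pacing and the temp-file path are assumptions, and the real helper (in
# common/autotest_common.sh) also wraps the read itself in a second retry loop:
  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # one direct read, then require that dd actually produced a non-empty file
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      local size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }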
00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.440 1+0 records in 00:07:04.440 1+0 records out 00:07:04.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116168 s, 3.5 MB/s 00:07:04.440 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.700 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.700 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.700 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.700 23:08:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.700 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.700 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:04.700 23:08:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:04.700 /dev/nbd13 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.700 1+0 records in 00:07:04.700 1+0 records out 00:07:04.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732131 s, 5.6 MB/s 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:04.700 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:04.958 /dev/nbd14 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.958 1+0 records in 00:07:04.958 1+0 records out 00:07:04.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0013352 s, 3.1 MB/s 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.958 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.222 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.222 { 00:07:05.222 "nbd_device": "/dev/nbd0", 00:07:05.222 "bdev_name": "Nvme0n1" 00:07:05.222 }, 00:07:05.222 { 00:07:05.222 "nbd_device": "/dev/nbd1", 00:07:05.222 "bdev_name": "Nvme1n1p1" 00:07:05.222 }, 00:07:05.222 { 00:07:05.222 "nbd_device": "/dev/nbd10", 00:07:05.222 "bdev_name": "Nvme1n1p2" 00:07:05.222 }, 00:07:05.222 { 00:07:05.222 "nbd_device": "/dev/nbd11", 00:07:05.222 "bdev_name": "Nvme2n1" 00:07:05.222 }, 00:07:05.222 { 00:07:05.222 "nbd_device": "/dev/nbd12", 00:07:05.222 "bdev_name": "Nvme2n2" 00:07:05.222 }, 00:07:05.222 { 00:07:05.222 "nbd_device": "/dev/nbd13", 00:07:05.223 "bdev_name": "Nvme2n3" 
00:07:05.223 }, 00:07:05.223 { 00:07:05.223 "nbd_device": "/dev/nbd14", 00:07:05.223 "bdev_name": "Nvme3n1" 00:07:05.223 } 00:07:05.223 ]' 00:07:05.223 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.223 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.223 { 00:07:05.223 "nbd_device": "/dev/nbd0", 00:07:05.223 "bdev_name": "Nvme0n1" 00:07:05.223 }, 00:07:05.223 { 00:07:05.223 "nbd_device": "/dev/nbd1", 00:07:05.223 "bdev_name": "Nvme1n1p1" 00:07:05.223 }, 00:07:05.223 { 00:07:05.223 "nbd_device": "/dev/nbd10", 00:07:05.223 "bdev_name": "Nvme1n1p2" 00:07:05.223 }, 00:07:05.223 { 00:07:05.223 "nbd_device": "/dev/nbd11", 00:07:05.223 "bdev_name": "Nvme2n1" 00:07:05.223 }, 00:07:05.223 { 00:07:05.223 "nbd_device": "/dev/nbd12", 00:07:05.223 "bdev_name": "Nvme2n2" 00:07:05.223 }, 00:07:05.223 { 00:07:05.223 "nbd_device": "/dev/nbd13", 00:07:05.223 "bdev_name": "Nvme2n3" 00:07:05.223 }, 00:07:05.223 { 00:07:05.223 "nbd_device": "/dev/nbd14", 00:07:05.223 "bdev_name": "Nvme3n1" 00:07:05.223 } 00:07:05.223 ]' 00:07:05.223 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.223 /dev/nbd1 00:07:05.223 /dev/nbd10 00:07:05.223 /dev/nbd11 00:07:05.223 /dev/nbd12 00:07:05.223 /dev/nbd13 00:07:05.223 /dev/nbd14' 00:07:05.223 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.223 /dev/nbd1 00:07:05.223 /dev/nbd10 00:07:05.223 /dev/nbd11 00:07:05.223 /dev/nbd12 00:07:05.223 /dev/nbd13 00:07:05.223 /dev/nbd14' 00:07:05.223 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.223 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:05.223 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:05.223 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:05.223 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:05.223 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:05.223 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:05.223 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.223 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.223 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:05.223 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.224 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:05.224 256+0 records in 00:07:05.224 256+0 records out 00:07:05.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00943621 s, 111 MB/s 00:07:05.224 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.224 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.490 256+0 records in 00:07:05.490 256+0 records out 00:07:05.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.237092 s, 4.4 MB/s 00:07:05.490 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.490 23:08:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.750 256+0 records in 00:07:05.750 256+0 records out 00:07:05.750 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.232743 s, 4.5 MB/s 00:07:05.750 23:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.750 23:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:06.016 256+0 records in 00:07:06.016 256+0 records out 00:07:06.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225571 s, 4.6 MB/s 00:07:06.016 23:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.016 23:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:06.279 256+0 records in 00:07:06.279 256+0 records out 00:07:06.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238535 s, 4.4 MB/s 00:07:06.279 23:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.279 23:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:06.540 256+0 records in 00:07:06.540 256+0 records out 00:07:06.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236854 s, 4.4 MB/s 00:07:06.540 23:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.540 23:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:06.801 256+0 records in 00:07:06.801 256+0 records out 00:07:06.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239178 s, 4.4 MB/s 00:07:06.801 23:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.801 23:08:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:07.060 256+0 records in 00:07:07.060 256+0 records out 00:07:07.060 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.248317 s, 4.2 MB/s 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.060 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:07.061 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.061 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:07.061 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.061 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:07.061 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:07.061 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:07.061 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.061 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:07.061 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:07.061 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:07.061 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.061 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.321 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.321 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.321 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.321 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.321 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.321 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.321 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.321 23:08:39 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:07.321 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.321 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.584 23:08:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:07.845 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:07.845 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:07.845 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:07.845 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.845 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.846 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:07.846 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.846 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.846 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.846 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:08.108 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:08.108 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:08.108 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:08.108 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.108 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.108 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:08.108 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.108 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.108 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.108 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:08.372 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:08.372 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:08.372 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:08.372 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.372 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.372 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:08.372 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.372 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.372 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.372 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:08.632 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:08.632 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:08.632 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:08.632 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.632 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.632 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:08.632 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.632 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.632 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.632 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.632 23:08:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:08.894 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:09.154 malloc_lvol_verify 00:07:09.155 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:09.155 811607df-bf78-48b7-8553-b8caa5577967 00:07:09.414 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:09.414 cb01e8d7-b7b0-4253-9217-f29eee145c60 00:07:09.414 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:09.673 /dev/nbd0 00:07:09.673 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:09.673 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:09.673 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:09.673 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:09.673 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:09.673 mke2fs 1.47.0 (5-Feb-2023) 00:07:09.673 Discarding device blocks: 0/4096 done 00:07:09.673 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:09.673 00:07:09.673 Allocating group tables: 0/1 done 00:07:09.673 Writing inode tables: 0/1 done 00:07:09.673 Creating journal (1024 blocks): done 00:07:09.673 Writing superblocks and filesystem accounting information: 0/1 done 00:07:09.673 00:07:09.673 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:09.673 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.673 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:09.673 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.673 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:09.673 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:09.673 23:08:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.932 23:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.932 23:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.932 23:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.932 23:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.932 23:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.932 23:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.932 23:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.932 23:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.932 23:08:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61447 00:07:09.932 23:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61447 ']' 00:07:09.932 23:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61447 00:07:09.933 23:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:09.933 23:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.933 23:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61447 00:07:09.933 23:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.933 23:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.933 killing process with pid 61447 00:07:09.933 23:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61447' 00:07:09.933 23:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61447 00:07:09.933 23:08:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61447 00:07:10.873 ************************************ 00:07:10.873 END TEST bdev_nbd 00:07:10.873 ************************************ 00:07:10.873 23:08:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:10.873 00:07:10.873 real 0m12.265s 00:07:10.873 user 0m16.464s 00:07:10.873 sys 0m4.116s 00:07:10.873 23:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.873 23:08:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:11.133 23:08:43 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:11.133 23:08:43 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:11.133 skipping fio tests on NVMe due to multi-ns failures. 00:07:11.133 23:08:43 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:11.133 23:08:43 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:11.133 23:08:43 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:11.133 23:08:43 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:11.133 23:08:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:11.133 23:08:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.133 23:08:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.133 ************************************ 00:07:11.133 START TEST bdev_verify 00:07:11.133 ************************************ 00:07:11.133 23:08:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:11.133 [2024-11-25 23:08:43.311717] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:07:11.133 [2024-11-25 23:08:43.311814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61872 ] 00:07:11.133 [2024-11-25 23:08:43.460867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.391 [2024-11-25 23:08:43.546641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.391 [2024-11-25 23:08:43.546796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.964 Running I/O for 5 seconds... 
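For anyone reproducing the verify run just launched, the invocation is buried in the xtrace above; reconstructed here from the repo root, with flag meanings spelled out (comments reflect common bdevperf usage; -C is simply carried over from the harness as-is):

    # Same command line the harness ran.
    args=(
        --json test/bdev/bdev.json   # bdev configuration to load
        -q 128                       # queue depth
        -o 4096                      # I/O size in bytes
        -w verify                    # write, read back, and compare
        -t 5                         # run for five seconds
        -C                           # carried over from the harness as-is
        -m 0x3                       # core mask 0x3: reactors on cores 0 and 1
    )
    build/examples/bdevperf "${args[@]}"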
00:07:14.368 19392.00 IOPS, 75.75 MiB/s [2024-11-25T23:08:47.682Z] 19456.00 IOPS, 76.00 MiB/s [2024-11-25T23:08:48.618Z] 19136.00 IOPS, 74.75 MiB/s [2024-11-25T23:08:49.557Z] 19024.00 IOPS, 74.31 MiB/s [2024-11-25T23:08:49.557Z] 19110.40 IOPS, 74.65 MiB/s 00:07:17.188 Latency(us) 00:07:17.188 [2024-11-25T23:08:49.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.188 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:17.188 Verification LBA range: start 0x0 length 0xbd0bd 00:07:17.188 Nvme0n1 : 5.06 1341.10 5.24 0.00 0.00 95176.40 21475.64 78239.90 00:07:17.188 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:17.188 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:17.188 Nvme0n1 : 5.10 1356.04 5.30 0.00 0.00 93119.60 12351.02 82676.18 00:07:17.188 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:17.188 Verification LBA range: start 0x0 length 0x4ff80 00:07:17.188 Nvme1n1p1 : 5.06 1340.61 5.24 0.00 0.00 95074.36 22887.19 74610.22 00:07:17.188 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:17.188 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:17.188 Nvme1n1p1 : 5.10 1355.63 5.30 0.00 0.00 93042.10 10838.65 83482.78 00:07:17.188 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:17.188 Verification LBA range: start 0x0 length 0x4ff7f 00:07:17.188 Nvme1n1p2 : 5.06 1340.10 5.23 0.00 0.00 94979.55 25206.15 72190.42 00:07:17.188 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:17.188 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:17.188 Nvme1n1p2 : 5.05 1344.57 5.25 0.00 0.00 94835.78 21072.34 95178.44 00:07:17.188 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:17.188 Verification LBA range: start 0x0 length 0x80000 00:07:17.188 Nvme2n1 : 5.06 1339.67 5.23 0.00 0.00 94848.82 26214.40 71787.13 00:07:17.188 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:17.188 Verification LBA range: start 0x80000 length 0x80000 00:07:17.188 Nvme2n1 : 5.08 1348.44 5.27 0.00 0.00 94059.97 11544.42 81466.29 00:07:17.188 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:17.188 Verification LBA range: start 0x0 length 0x80000 00:07:17.188 Nvme2n2 : 5.07 1339.22 5.23 0.00 0.00 94691.70 24399.56 74610.22 00:07:17.188 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:17.188 Verification LBA range: start 0x80000 length 0x80000 00:07:17.188 Nvme2n2 : 5.09 1357.21 5.30 0.00 0.00 93464.61 11645.24 78643.20 00:07:17.188 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:17.188 Verification LBA range: start 0x0 length 0x80000 00:07:17.188 Nvme2n3 : 5.08 1348.15 5.27 0.00 0.00 93959.96 6906.49 77030.01 00:07:17.188 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:17.188 Verification LBA range: start 0x80000 length 0x80000 00:07:17.188 Nvme2n3 : 5.09 1356.81 5.30 0.00 0.00 93312.63 11796.48 77836.60 00:07:17.188 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:17.188 Verification LBA range: start 0x0 length 0x20000 00:07:17.188 Nvme3n1 : 5.08 1347.70 5.26 0.00 0.00 93820.58 5999.06 78643.20 00:07:17.188 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:17.188 Verification LBA range: start 0x20000 length 0x20000 00:07:17.188 Nvme3n1 
: 5.10 1356.42 5.30 0.00 0.00 93189.96 12098.95 79853.10 00:07:17.188 [2024-11-25T23:08:49.557Z] =================================================================================================================== 00:07:17.188 [2024-11-25T23:08:49.557Z] Total : 18871.68 73.72 0.00 0.00 94106.32 5999.06 95178.44 00:07:18.126 00:07:18.126 real 0m7.197s 00:07:18.126 user 0m13.389s 00:07:18.126 sys 0m0.233s 00:07:18.126 23:08:50 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.126 ************************************ 00:07:18.126 END TEST bdev_verify 00:07:18.126 ************************************ 00:07:18.126 23:08:50 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:18.387 23:08:50 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:18.387 23:08:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:18.387 23:08:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.387 23:08:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.387 ************************************ 00:07:18.387 START TEST bdev_verify_big_io 00:07:18.387 ************************************ 00:07:18.387 23:08:50 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:18.387 [2024-11-25 23:08:50.602709] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:07:18.387 [2024-11-25 23:08:50.602863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61971 ] 00:07:18.647 [2024-11-25 23:08:50.769823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.647 [2024-11-25 23:08:50.939438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.647 [2024-11-25 23:08:50.939558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.588 Running I/O for 5 seconds... 
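The MiB/s column in the table above is derivable from the IOPS column: bandwidth is IOPS times the 4096-byte I/O size. A quick check against the Total row, plus a note on the 65536-byte run now starting:

    # 18871.68 IOPS at 4 KiB per I/O:
    awk 'BEGIN { printf "%.2f MiB/s\n", 18871.68 * 4096 / 1048576 }'
    # -> 73.72 MiB/s, matching the Total row above. The big-io run now in
    # flight uses -o 65536, so each completion moves 16x the data.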
00:07:25.431 1148.00 IOPS, 71.75 MiB/s [2024-11-25T23:08:58.370Z] 2461.00 IOPS, 153.81 MiB/s [2024-11-25T23:08:58.371Z] 3508.00 IOPS, 219.25 MiB/s 00:07:26.002 Latency(us) 00:07:26.002 [2024-11-25T23:08:58.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.002 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.002 Verification LBA range: start 0x0 length 0xbd0b 00:07:26.002 Nvme0n1 : 5.59 137.41 8.59 0.00 0.00 901062.63 19358.33 1129235.69 00:07:26.002 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.002 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:26.002 Nvme0n1 : 6.02 67.73 4.23 0.00 0.00 1787161.28 22685.54 1819682.66 00:07:26.002 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.002 Verification LBA range: start 0x0 length 0x4ff8 00:07:26.002 Nvme1n1p1 : 5.78 124.67 7.79 0.00 0.00 959803.91 98404.82 1626099.40 00:07:26.002 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.002 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:26.002 Nvme1n1p1 : 6.06 73.90 4.62 0.00 0.00 1587626.70 35893.56 1768060.46 00:07:26.002 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.002 Verification LBA range: start 0x0 length 0x4ff7 00:07:26.002 Nvme1n1p2 : 5.78 121.70 7.61 0.00 0.00 956370.74 108890.58 1477685.56 00:07:26.002 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.002 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:26.002 Nvme1n1p2 : 6.13 78.76 4.92 0.00 0.00 1412660.78 27222.65 1793871.56 00:07:26.002 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.002 Verification LBA range: start 0x0 length 0x8000 00:07:26.002 Nvme2n1 : 5.79 143.26 8.95 0.00 0.00 798374.30 78643.20 819502.47 00:07:26.002 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.002 Verification LBA range: start 0x8000 length 0x8000 00:07:26.002 Nvme2n1 : 6.13 83.46 5.22 0.00 0.00 1267733.86 40733.14 1477685.56 00:07:26.002 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.002 Verification LBA range: start 0x0 length 0x8000 00:07:26.002 Nvme2n2 : 5.87 152.58 9.54 0.00 0.00 733731.33 41539.74 845313.58 00:07:26.002 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.002 Verification LBA range: start 0x8000 length 0x8000 00:07:26.002 Nvme2n2 : 6.24 98.26 6.14 0.00 0.00 1044431.58 19862.45 1509949.44 00:07:26.002 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.002 Verification LBA range: start 0x0 length 0x8000 00:07:26.002 Nvme2n3 : 5.91 156.56 9.78 0.00 0.00 692571.80 39926.55 858219.13 00:07:26.002 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.002 Verification LBA range: start 0x8000 length 0x8000 00:07:26.002 Nvme2n3 : 6.38 136.58 8.54 0.00 0.00 722762.99 8670.92 1535760.54 00:07:26.002 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.002 Verification LBA range: start 0x0 length 0x2000 00:07:26.002 Nvme3n1 : 6.02 180.85 11.30 0.00 0.00 585471.01 422.20 877577.45 00:07:26.002 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.002 Verification LBA range: start 0x2000 length 0x2000 00:07:26.002 Nvme3n1 : 6.58 233.46 14.59 0.00 0.00 407115.54 248.91 1574477.19 00:07:26.002 
[2024-11-25T23:08:58.371Z] =================================================================================================================== 00:07:26.002 [2024-11-25T23:08:58.371Z] Total : 1789.17 111.82 0.00 0.00 862878.38 248.91 1819682.66 00:07:27.388 00:07:27.388 real 0m9.209s 00:07:27.388 user 0m17.228s 00:07:27.388 sys 0m0.395s 00:07:27.388 23:08:59 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.388 ************************************ 00:07:27.388 END TEST bdev_verify_big_io 00:07:27.388 ************************************ 00:07:27.388 23:08:59 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:27.649 23:08:59 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.649 23:08:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:27.649 23:08:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.649 23:08:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.649 ************************************ 00:07:27.649 START TEST bdev_write_zeroes 00:07:27.649 ************************************ 00:07:27.649 23:08:59 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.649 [2024-11-25 23:08:59.863263] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:07:27.649 [2024-11-25 23:08:59.863383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62091 ] 00:07:27.908 [2024-11-25 23:09:00.020933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.908 [2024-11-25 23:09:00.125945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.472 Running I/O for 1 seconds... 
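The write_zeroes run now in flight only makes sense against bdevs that advertise the operation; the capability is reported per bdev in its supported_io_types map, as the GPT bdev dumps later in this log show. A one-line check, assuming a running target reachable on rpc.py's default socket:

    # Prints "true" when the bdev accepts the write_zeroes workload above.
    scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
        | jq -r '.[0].supported_io_types.write_zeroes'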
00:07:29.405 69440.00 IOPS, 271.25 MiB/s 00:07:29.405 Latency(us) 00:07:29.405 [2024-11-25T23:09:01.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.405 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.405 Nvme0n1 : 1.02 9880.46 38.60 0.00 0.00 12926.79 9578.34 23996.26 00:07:29.405 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.405 Nvme1n1p1 : 1.02 9868.09 38.55 0.00 0.00 12926.05 9326.28 23794.61 00:07:29.405 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.405 Nvme1n1p2 : 1.03 9855.96 38.50 0.00 0.00 12907.12 9578.34 23290.49 00:07:29.405 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.405 Nvme2n1 : 1.03 9844.73 38.46 0.00 0.00 12870.85 7864.32 22584.71 00:07:29.405 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.405 Nvme2n2 : 1.03 9833.64 38.41 0.00 0.00 12850.41 6906.49 22786.36 00:07:29.405 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.405 Nvme2n3 : 1.03 9822.45 38.37 0.00 0.00 12844.27 6704.84 22887.19 00:07:29.405 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.405 Nvme3n1 : 1.03 9811.42 38.33 0.00 0.00 12841.42 6755.25 24097.08 00:07:29.405 [2024-11-25T23:09:01.774Z] =================================================================================================================== 00:07:29.405 [2024-11-25T23:09:01.774Z] Total : 68916.75 269.21 0.00 0.00 12880.99 6704.84 24097.08 00:07:30.339 00:07:30.339 real 0m2.690s 00:07:30.339 user 0m2.375s 00:07:30.339 sys 0m0.201s 00:07:30.339 ************************************ 00:07:30.339 END TEST bdev_write_zeroes 00:07:30.339 ************************************ 00:07:30.339 23:09:02 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.339 23:09:02 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:30.339 23:09:02 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:30.339 23:09:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:30.339 23:09:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.339 23:09:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.339 ************************************ 00:07:30.339 START TEST bdev_json_nonenclosed 00:07:30.339 ************************************ 00:07:30.339 23:09:02 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:30.339 [2024-11-25 23:09:02.619547] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:07:30.340 [2024-11-25 23:09:02.619664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62139 ] 00:07:30.600 [2024-11-25 23:09:02.780876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.600 [2024-11-25 23:09:02.937465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.600 [2024-11-25 23:09:02.937605] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:30.600 [2024-11-25 23:09:02.937628] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:30.600 [2024-11-25 23:09:02.937641] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.862 00:07:30.862 real 0m0.612s 00:07:30.862 user 0m0.394s 00:07:30.862 sys 0m0.112s 00:07:30.862 23:09:03 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.862 23:09:03 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:30.862 ************************************ 00:07:30.862 END TEST bdev_json_nonenclosed 00:07:30.862 ************************************ 00:07:30.862 23:09:03 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:30.862 23:09:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:30.862 23:09:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.862 23:09:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.123 ************************************ 00:07:31.123 START TEST bdev_json_nonarray 00:07:31.123 ************************************ 00:07:31.123 23:09:03 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:31.123 [2024-11-25 23:09:03.319888] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:07:31.123 [2024-11-25 23:09:03.320079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62164 ] 00:07:31.123 [2024-11-25 23:09:03.483095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.383 [2024-11-25 23:09:03.649075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.383 [2024-11-25 23:09:03.649217] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
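The two negative tests above feed bdevperf configs that json_config_prepare_ctx rejects for exactly the reasons logged. Illustrative shapes (not the repo's actual fixture files) next to a minimally valid one:

    # Rejected: top-level value not enclosed in {}.
    printf '"subsystems": []\n' > nonenclosed.json
    # Rejected: "subsystems" present but not an array.
    printf '{ "subsystems": {} }\n' > nonarray.json
    # Accepted shape: an object whose "subsystems" key holds an array.
    printf '{ "subsystems": [] }\n' > valid.json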
00:07:31.383 [2024-11-25 23:09:03.649240] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:31.383 [2024-11-25 23:09:03.649253] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.644 00:07:31.644 real 0m0.643s 00:07:31.644 user 0m0.395s 00:07:31.644 sys 0m0.139s 00:07:31.644 23:09:03 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.644 ************************************ 00:07:31.644 END TEST bdev_json_nonarray 00:07:31.644 ************************************ 00:07:31.644 23:09:03 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:31.644 23:09:03 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:31.644 23:09:03 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:31.644 23:09:03 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:31.644 23:09:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.644 23:09:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.644 23:09:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.644 ************************************ 00:07:31.644 START TEST bdev_gpt_uuid 00:07:31.644 ************************************ 00:07:31.644 23:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:31.644 23:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:31.644 23:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:31.644 23:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62195 00:07:31.644 23:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:31.644 23:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62195 00:07:31.644 23:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:31.644 23:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62195 ']' 00:07:31.644 23:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.644 23:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.644 23:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.644 23:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.644 23:09:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:31.931 [2024-11-25 23:09:04.045183] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
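The gpt_uuid test about to run reduces to two assertions: a GPT partition bdev is addressable by its unique partition GUID, and that GUID round-trips through the bdev's gpt driver-specific data. A standalone sketch against the spdk_tgt just started (GUID, default /var/tmp/spdk.sock socket, and jq filters as in the trace that follows):

    guid=6f89f330-603b-4116-ac73-2ca8eae53030   # Nvme1n1p1, per the dump below
    bdev=$(scripts/rpc.py bdev_get_bdevs -b "$guid")
    # 1) the partition is addressable by its unique GUID as an alias
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$guid" ]]
    # 2) the GUID round-trips through the gpt driver-specific data
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$guid" ]] \
        && echo "GUID resolves and round-trips"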
00:07:31.931 [2024-11-25 23:09:04.045334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62195 ] 00:07:31.931 [2024-11-25 23:09:04.203403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.240 [2024-11-25 23:09:04.354929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.812 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.812 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:32.812 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:32.812 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.812 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:33.386 Some configs were skipped because the RPC state that can call them passed over. 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:33.386 { 00:07:33.386 "name": "Nvme1n1p1", 00:07:33.386 "aliases": [ 00:07:33.386 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:33.386 ], 00:07:33.386 "product_name": "GPT Disk", 00:07:33.386 "block_size": 4096, 00:07:33.386 "num_blocks": 655104, 00:07:33.386 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:33.386 "assigned_rate_limits": { 00:07:33.386 "rw_ios_per_sec": 0, 00:07:33.386 "rw_mbytes_per_sec": 0, 00:07:33.386 "r_mbytes_per_sec": 0, 00:07:33.386 "w_mbytes_per_sec": 0 00:07:33.386 }, 00:07:33.386 "claimed": false, 00:07:33.386 "zoned": false, 00:07:33.386 "supported_io_types": { 00:07:33.386 "read": true, 00:07:33.386 "write": true, 00:07:33.386 "unmap": true, 00:07:33.386 "flush": true, 00:07:33.386 "reset": true, 00:07:33.386 "nvme_admin": false, 00:07:33.386 "nvme_io": false, 00:07:33.386 "nvme_io_md": false, 00:07:33.386 "write_zeroes": true, 00:07:33.386 "zcopy": false, 00:07:33.386 "get_zone_info": false, 00:07:33.386 "zone_management": false, 00:07:33.386 "zone_append": false, 00:07:33.386 "compare": true, 00:07:33.386 "compare_and_write": false, 00:07:33.386 "abort": true, 00:07:33.386 "seek_hole": false, 00:07:33.386 "seek_data": false, 00:07:33.386 "copy": true, 00:07:33.386 "nvme_iov_md": false 00:07:33.386 }, 00:07:33.386 "driver_specific": { 
00:07:33.386 "gpt": { 00:07:33.386 "base_bdev": "Nvme1n1", 00:07:33.386 "offset_blocks": 256, 00:07:33.386 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:33.386 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:33.386 "partition_name": "SPDK_TEST_first" 00:07:33.386 } 00:07:33.386 } 00:07:33.386 } 00:07:33.386 ]' 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:33.386 { 00:07:33.386 "name": "Nvme1n1p2", 00:07:33.386 "aliases": [ 00:07:33.386 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:33.386 ], 00:07:33.386 "product_name": "GPT Disk", 00:07:33.386 "block_size": 4096, 00:07:33.386 "num_blocks": 655103, 00:07:33.386 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:33.386 "assigned_rate_limits": { 00:07:33.386 "rw_ios_per_sec": 0, 00:07:33.386 "rw_mbytes_per_sec": 0, 00:07:33.386 "r_mbytes_per_sec": 0, 00:07:33.386 "w_mbytes_per_sec": 0 00:07:33.386 }, 00:07:33.386 "claimed": false, 00:07:33.386 "zoned": false, 00:07:33.386 "supported_io_types": { 00:07:33.386 "read": true, 00:07:33.386 "write": true, 00:07:33.386 "unmap": true, 00:07:33.386 "flush": true, 00:07:33.386 "reset": true, 00:07:33.386 "nvme_admin": false, 00:07:33.386 "nvme_io": false, 00:07:33.386 "nvme_io_md": false, 00:07:33.386 "write_zeroes": true, 00:07:33.386 "zcopy": false, 00:07:33.386 "get_zone_info": false, 00:07:33.386 "zone_management": false, 00:07:33.386 "zone_append": false, 00:07:33.386 "compare": true, 00:07:33.386 "compare_and_write": false, 00:07:33.386 "abort": true, 00:07:33.386 "seek_hole": false, 00:07:33.386 "seek_data": false, 00:07:33.386 "copy": true, 00:07:33.386 "nvme_iov_md": false 00:07:33.386 }, 00:07:33.386 "driver_specific": { 00:07:33.386 "gpt": { 00:07:33.386 "base_bdev": "Nvme1n1", 00:07:33.386 "offset_blocks": 655360, 00:07:33.386 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:33.386 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:33.386 "partition_name": "SPDK_TEST_second" 00:07:33.386 } 00:07:33.386 } 00:07:33.386 } 00:07:33.386 ]' 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:33.386 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:33.646 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:33.646 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62195 00:07:33.646 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62195 ']' 00:07:33.646 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62195 00:07:33.646 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:33.646 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.646 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62195 00:07:33.646 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.646 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.646 killing process with pid 62195 00:07:33.646 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62195' 00:07:33.646 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62195 00:07:33.646 23:09:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62195 00:07:35.034 00:07:35.034 real 0m3.321s 00:07:35.034 user 0m3.295s 00:07:35.034 sys 0m0.585s 00:07:35.034 23:09:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.034 ************************************ 00:07:35.034 END TEST bdev_gpt_uuid 00:07:35.034 23:09:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:35.034 ************************************ 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:35.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:35.554 Waiting for block devices as requested 00:07:35.554 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:35.554 0000:00:10.0 (1b36 0010): 
00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:35.034 23:09:07 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:35.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:35.554 Waiting for block devices as requested 00:07:35.554 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:35.554 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:35.815 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:35.815 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:41.102 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:41.102 23:09:13 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:41.102 23:09:13 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:41.364 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:41.364 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:41.364 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:41.364 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:41.364 23:09:13 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]]
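The wipefs output above decodes neatly: the 8 erased bytes 45 46 49 20 50 41 52 54 are ASCII for "EFI PART", the GPT header signature, cleared once at offset 0x1000 (LBA 1 of a device formatted with 4096-byte sectors) and once in the backup header 4096 bytes before the end of the device (0x13ffff000 + 0x1000 = 0x140000000 bytes, i.e. a 5 GiB device), while the 2 bytes 55 aa at offset 0x1fe are the protective MBR's boot signature. Both decodings can be sanity-checked in the shell:

    # The erased GPT signature bytes spell out the header magic:
    printf '\x45\x46\x49\x20\x50\x41\x52\x54\n'   # prints: EFI PART
    # Offset 0x1000 is byte 4096, i.e. LBA 1 on a 4096-byte-sector disk:
    printf '%d\n' 0x1000                          # prints: 4096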
00:07:41.364 00:07:41.364 real 0m58.491s 00:07:41.364 user 1m13.440s 00:07:41.364 sys 0m8.850s 00:07:41.364 23:09:13 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.364 ************************************ 00:07:41.364 END TEST blockdev_nvme_gpt 00:07:41.364 ************************************ 00:07:41.364 23:09:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.364 23:09:13 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:41.364 23:09:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.364 23:09:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.364 23:09:13 -- common/autotest_common.sh@10 -- # set +x 00:07:41.364 ************************************ 00:07:41.364 START TEST nvme 00:07:41.364 ************************************ 00:07:41.364 23:09:13 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:41.364 * Looking for test storage... 00:07:41.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:41.364 23:09:13 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:41.364 23:09:13 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:41.364 23:09:13 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:41.364 23:09:13 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:41.364 23:09:13 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.364 23:09:13 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.364 23:09:13 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.364 23:09:13 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.364 23:09:13 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.364 23:09:13 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.364 23:09:13 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.364 23:09:13 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.364 23:09:13 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.364 23:09:13 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.364 23:09:13 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.364 23:09:13 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:41.364 23:09:13 nvme -- scripts/common.sh@345 -- # : 1 00:07:41.364 23:09:13 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.364 23:09:13 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.364 23:09:13 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:41.364 23:09:13 nvme -- scripts/common.sh@353 -- # local d=1 00:07:41.364 23:09:13 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.364 23:09:13 nvme -- scripts/common.sh@355 -- # echo 1 00:07:41.364 23:09:13 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.364 23:09:13 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:41.364 23:09:13 nvme -- scripts/common.sh@353 -- # local d=2 00:07:41.364 23:09:13 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.364 23:09:13 nvme -- scripts/common.sh@355 -- # echo 2 00:07:41.364 23:09:13 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.364 23:09:13 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.364 23:09:13 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.364 23:09:13 nvme -- scripts/common.sh@368 -- # return 0 00:07:41.364 23:09:13 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.364 23:09:13 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:41.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.364 --rc genhtml_branch_coverage=1 00:07:41.364 --rc genhtml_function_coverage=1 00:07:41.364 --rc genhtml_legend=1 00:07:41.364 --rc geninfo_all_blocks=1 00:07:41.364 --rc geninfo_unexecuted_blocks=1 00:07:41.364 00:07:41.364 ' 00:07:41.364 23:09:13 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:41.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.364 --rc genhtml_branch_coverage=1 00:07:41.364 --rc genhtml_function_coverage=1 00:07:41.364 --rc genhtml_legend=1 00:07:41.364 --rc geninfo_all_blocks=1 00:07:41.364 --rc geninfo_unexecuted_blocks=1 00:07:41.364 00:07:41.364 ' 00:07:41.364 23:09:13 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:41.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.364 --rc genhtml_branch_coverage=1 00:07:41.364 --rc genhtml_function_coverage=1 00:07:41.364 --rc genhtml_legend=1 00:07:41.364 --rc geninfo_all_blocks=1 00:07:41.364 --rc geninfo_unexecuted_blocks=1 00:07:41.364 00:07:41.365 ' 00:07:41.365 23:09:13 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:41.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.365 --rc genhtml_branch_coverage=1 00:07:41.365 --rc genhtml_function_coverage=1 00:07:41.365 --rc genhtml_legend=1 00:07:41.365 --rc geninfo_all_blocks=1 00:07:41.365 --rc geninfo_unexecuted_blocks=1 00:07:41.365 00:07:41.365 '
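The scripts/common.sh trace above is a pure-bash version comparison: lt 1.15 2 splits both version strings on '.', '-' and ':' (IFS=.-:) into the ver1 and ver2 arrays, then walks the components numerically, with decimal validating each field; the first unequal pair decides, so 1 < 2 settles it on the first component and return 0 marks the installed lcov as older than 2, which is why the canned LCOV_OPTS fallback above gets exported. A condensed sketch of the same idea (version_lt is a hypothetical name, not the autotest helper itself):

    # Sketch: component-wise "less than" for dotted version strings.
    # Assumes purely numeric fields, as in the trace (1.15 vs 2).
    version_lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < max; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov older than 2.x'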
00:07:41.365 23:09:13 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:41.937 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:42.585 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:42.585 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:42.585 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:42.585 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:42.585 23:09:14 nvme -- nvme/nvme.sh@79 -- # uname 00:07:42.585 23:09:14 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:42.585 23:09:14 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:42.585 23:09:14 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:42.585 23:09:14 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:42.585 23:09:14 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:42.585 23:09:14 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:42.585 23:09:14 nvme -- common/autotest_common.sh@1075 -- # stubpid=62835 00:07:42.585 23:09:14 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:42.585 Waiting for stub to ready for secondary processes... 00:07:42.585 23:09:14 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:42.585 23:09:14 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62835 ]] 00:07:42.585 23:09:14 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:42.585 23:09:14 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:42.585 [2024-11-25 23:09:14.874111] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:07:42.585 [2024-11-25 23:09:14.874229] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:43.532 [2024-11-25 23:09:15.747322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:43.532 [2024-11-25 23:09:15.834704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.532 [2024-11-25 23:09:15.834960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.532 [2024-11-25 23:09:15.834985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.532 23:09:15 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:43.532 23:09:15 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62835 ]] 00:07:43.532 23:09:15 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:43.532 [2024-11-25 23:09:15.857472] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:43.532 [2024-11-25 23:09:15.857511] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:43.532 [2024-11-25 23:09:15.868097] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:43.532 [2024-11-25 23:09:15.868230] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:43.532 [2024-11-25 23:09:15.871507] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:43.532 [2024-11-25 23:09:15.872123] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:43.532 [2024-11-25 23:09:15.872171] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:43.532 [2024-11-25 23:09:15.873603] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:43.532 [2024-11-25 23:09:15.873839] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:43.532 [2024-11-25 23:09:15.873876] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:43.532 [2024-11-25 23:09:15.875897] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:43.532 [2024-11-25 23:09:15.876008] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:43.532 [2024-11-25 23:09:15.876044] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:43.533 [2024-11-25 23:09:15.876710] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:43.533 [2024-11-25 23:09:15.876781] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:44.918 23:09:16 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:44.918 done. 00:07:44.918 23:09:16 nvme -- common/autotest_common.sh@1082 -- # echo done.
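The stub exchange above is a simple readiness poll: autotest launches the primary-process stub in the background, then loops once per second until the stub has created /var/run/spdk_stub0, bailing out early if /proc/<stubpid> disappears because the stub died; the cuse notices in between are the stub bringing up its character devices while the loop waits. The pattern, sketched with the binary path from the trace and simplified error handling:

    # Sketch: launch the primary SPDK stub and poll until it is ready.
    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    while [ ! -e /var/run/spdk_stub0 ]; do
      [[ -e /proc/$stubpid ]] || { echo 'stub exited before becoming ready' >&2; exit 1; }
      sleep 1s
    done
    echo done.   # matches the "done." line in the log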
00:07:44.918 23:09:16 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:44.918 23:09:16 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:44.918 23:09:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.918 23:09:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:44.918 ************************************ 00:07:44.918 START TEST nvme_reset 00:07:44.918 ************************************ 00:07:44.918 23:09:16 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:44.918 Initializing NVMe Controllers 00:07:44.918 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:44.918 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:44.918 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:44.918 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:44.918 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:44.918 00:07:44.918 real 0m0.202s 00:07:44.918 user 0m0.055s 00:07:44.918 sys 0m0.105s 00:07:44.918 23:09:17 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.918 23:09:17 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:44.918 ************************************ 00:07:44.918 END TEST nvme_reset 00:07:44.918 ************************************ 00:07:44.918 23:09:17 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:44.918 23:09:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.918 23:09:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.918 23:09:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:44.918 ************************************ 00:07:44.918 START TEST nvme_identify 00:07:44.918 ************************************ 00:07:44.918 23:09:17 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:44.918 23:09:17 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:44.918 23:09:17 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:44.918 23:09:17 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:44.918 23:09:17 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:44.918 23:09:17 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:44.918 23:09:17 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:44.918 23:09:17 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:44.918 23:09:17 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:44.918 23:09:17 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:44.918 23:09:17 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:44.918 23:09:17 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
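get_nvme_bdfs, traced above, builds the list of NVMe PCI addresses the identify pass will walk: gen_nvme.sh emits an SPDK JSON config, jq pulls each controller's traddr out of it, and the (( 4 == 0 )) guard would abort if the resulting array were empty. The same enumeration as a standalone sketch, with the paths the trace shows:

    # Sketch: enumerate NVMe controller BDFs from the generated SPDK config.
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo 'no NVMe controllers found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0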
00:07:44.918 23:09:17 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:45.182 [2024-11-25 23:09:17.359453] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62869 terminated unexpected 00:07:45.182 ===================================================== 00:07:45.182 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:45.182 ===================================================== 00:07:45.182 Controller Capabilities/Features 00:07:45.182 ================================ 00:07:45.182 Vendor ID: 1b36 00:07:45.182 Subsystem Vendor ID: 1af4 00:07:45.182 Serial Number: 12343 00:07:45.182 Model Number: QEMU NVMe Ctrl 00:07:45.182 Firmware Version: 8.0.0 00:07:45.182 Recommended Arb Burst: 6 00:07:45.182 IEEE OUI Identifier: 00 54 52 00:07:45.182 Multi-path I/O 00:07:45.182 May have multiple subsystem ports: No 00:07:45.182 May have multiple controllers: Yes 00:07:45.182 Associated with SR-IOV VF: No 00:07:45.182 Max Data Transfer Size: 524288 00:07:45.182 Max Number of Namespaces: 256 00:07:45.182 Max Number of I/O Queues: 64 00:07:45.182 NVMe Specification Version (VS): 1.4 00:07:45.182 NVMe Specification Version (Identify): 1.4 00:07:45.182 Maximum Queue Entries: 2048 00:07:45.182 Contiguous Queues Required: Yes 00:07:45.182 Arbitration Mechanisms Supported 00:07:45.182 Weighted Round Robin: Not Supported 00:07:45.182 Vendor Specific: Not Supported 00:07:45.182 Reset Timeout: 7500 ms 00:07:45.182 Doorbell Stride: 4 bytes 00:07:45.182 NVM Subsystem Reset: Not Supported 00:07:45.182 Command Sets Supported 00:07:45.182 NVM Command Set: Supported 00:07:45.182 Boot Partition: Not Supported 00:07:45.182 Memory Page Size Minimum: 4096 bytes 00:07:45.182 Memory Page Size Maximum: 65536 bytes 00:07:45.182 Persistent Memory Region: Not Supported 00:07:45.182 Optional Asynchronous Events Supported 00:07:45.182 Namespace Attribute Notices: Supported 00:07:45.182 Firmware Activation Notices: Not Supported 00:07:45.182 ANA Change Notices: Not Supported 00:07:45.182 PLE Aggregate Log Change Notices: Not Supported 00:07:45.182 LBA Status Info Alert Notices: Not Supported 00:07:45.182 EGE Aggregate Log Change Notices: Not Supported 00:07:45.182 Normal NVM Subsystem Shutdown event: Not Supported 00:07:45.182 Zone Descriptor Change Notices: Not Supported 00:07:45.182 Discovery Log Change Notices: Not Supported 00:07:45.182 Controller Attributes 00:07:45.182 128-bit Host Identifier: Not Supported 00:07:45.182 Non-Operational Permissive Mode: Not Supported 00:07:45.182 NVM Sets: Not Supported 00:07:45.182 Read Recovery Levels: Not Supported 00:07:45.182 Endurance Groups: Supported 00:07:45.182 Predictable Latency Mode: Not Supported 00:07:45.182 Traffic Based Keep ALive: Not Supported 00:07:45.182 Namespace Granularity: Not Supported 00:07:45.182 SQ Associations: Not Supported 00:07:45.182 UUID List: Not Supported 00:07:45.182 Multi-Domain Subsystem: Not Supported 00:07:45.182 Fixed Capacity Management: Not Supported 00:07:45.182 Variable Capacity Management: Not Supported 00:07:45.182 Delete Endurance Group: Not Supported 00:07:45.182 Delete NVM Set: Not Supported 00:07:45.182 Extended LBA Formats Supported: Supported 00:07:45.182 Flexible Data Placement Supported: Supported 00:07:45.182 00:07:45.182 Controller Memory Buffer Support 00:07:45.182 ================================ 00:07:45.182 Supported: No 00:07:45.182
00:07:45.182 Persistent Memory Region Support 00:07:45.182 ================================ 00:07:45.182 Supported: No 00:07:45.182 00:07:45.182 Admin Command Set Attributes 00:07:45.182 ============================ 00:07:45.182 Security Send/Receive: Not Supported 00:07:45.182 Format NVM: Supported 00:07:45.182 Firmware Activate/Download: Not Supported 00:07:45.182 Namespace Management: Supported 00:07:45.182 Device Self-Test: Not Supported 00:07:45.182 Directives: Supported 00:07:45.182 NVMe-MI: Not Supported 00:07:45.182 Virtualization Management: Not Supported 00:07:45.182 Doorbell Buffer Config: Supported 00:07:45.182 Get LBA Status Capability: Not Supported 00:07:45.182 Command & Feature Lockdown Capability: Not Supported 00:07:45.182 Abort Command Limit: 4 00:07:45.182 Async Event Request Limit: 4 00:07:45.182 Number of Firmware Slots: N/A 00:07:45.182 Firmware Slot 1 Read-Only: N/A 00:07:45.182 Firmware Activation Without Reset: N/A 00:07:45.182 Multiple Update Detection Support: N/A 00:07:45.182 Firmware Update Granularity: No Information Provided 00:07:45.183 Per-Namespace SMART Log: Yes 00:07:45.183 Asymmetric Namespace Access Log Page: Not Supported 00:07:45.183 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:45.183 Command Effects Log Page: Supported 00:07:45.183 Get Log Page Extended Data: Supported 00:07:45.183 Telemetry Log Pages: Not Supported 00:07:45.183 Persistent Event Log Pages: Not Supported 00:07:45.183 Supported Log Pages Log Page: May Support 00:07:45.183 Commands Supported & Effects Log Page: Not Supported 00:07:45.183 Feature Identifiers & Effects Log Page:May Support 00:07:45.183 NVMe-MI Commands & Effects Log Page: May Support 00:07:45.183 Data Area 4 for Telemetry Log: Not Supported 00:07:45.183 Error Log Page Entries Supported: 1 00:07:45.183 Keep Alive: Not Supported 00:07:45.183 00:07:45.183 NVM Command Set Attributes 00:07:45.183 ========================== 00:07:45.183 Submission Queue Entry Size 00:07:45.183 Max: 64 00:07:45.183 Min: 64 00:07:45.183 Completion Queue Entry Size 00:07:45.183 Max: 16 00:07:45.183 Min: 16 00:07:45.183 Number of Namespaces: 256 00:07:45.183 Compare Command: Supported 00:07:45.183 Write Uncorrectable Command: Not Supported 00:07:45.183 Dataset Management Command: Supported 00:07:45.183 Write Zeroes Command: Supported 00:07:45.183 Set Features Save Field: Supported 00:07:45.183 Reservations: Not Supported 00:07:45.183 Timestamp: Supported 00:07:45.183 Copy: Supported 00:07:45.183 Volatile Write Cache: Present 00:07:45.183 Atomic Write Unit (Normal): 1 00:07:45.183 Atomic Write Unit (PFail): 1 00:07:45.183 Atomic Compare & Write Unit: 1 00:07:45.183 Fused Compare & Write: Not Supported 00:07:45.183 Scatter-Gather List 00:07:45.183 SGL Command Set: Supported 00:07:45.183 SGL Keyed: Not Supported 00:07:45.183 SGL Bit Bucket Descriptor: Not Supported 00:07:45.183 SGL Metadata Pointer: Not Supported 00:07:45.183 Oversized SGL: Not Supported 00:07:45.183 SGL Metadata Address: Not Supported 00:07:45.183 SGL Offset: Not Supported 00:07:45.183 Transport SGL Data Block: Not Supported 00:07:45.183 Replay Protected Memory Block: Not Supported 00:07:45.183 00:07:45.183 Firmware Slot Information 00:07:45.183 ========================= 00:07:45.183 Active slot: 1 00:07:45.183 Slot 1 Firmware Revision: 1.0 00:07:45.183 00:07:45.183 00:07:45.183 Commands Supported and Effects 00:07:45.183 ============================== 00:07:45.183 Admin Commands 00:07:45.183 -------------- 00:07:45.183 Delete I/O Submission Queue (00h): Supported 
00:07:45.183 Create I/O Submission Queue (01h): Supported 00:07:45.183 Get Log Page (02h): Supported 00:07:45.183 Delete I/O Completion Queue (04h): Supported 00:07:45.183 Create I/O Completion Queue (05h): Supported 00:07:45.183 Identify (06h): Supported 00:07:45.183 Abort (08h): Supported 00:07:45.183 Set Features (09h): Supported 00:07:45.183 Get Features (0Ah): Supported 00:07:45.183 Asynchronous Event Request (0Ch): Supported 00:07:45.183 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:45.183 Directive Send (19h): Supported 00:07:45.183 Directive Receive (1Ah): Supported 00:07:45.183 Virtualization Management (1Ch): Supported 00:07:45.183 Doorbell Buffer Config (7Ch): Supported 00:07:45.183 Format NVM (80h): Supported LBA-Change 00:07:45.183 I/O Commands 00:07:45.183 ------------ 00:07:45.183 Flush (00h): Supported LBA-Change 00:07:45.183 Write (01h): Supported LBA-Change 00:07:45.183 Read (02h): Supported 00:07:45.183 Compare (05h): Supported 00:07:45.183 Write Zeroes (08h): Supported LBA-Change 00:07:45.183 Dataset Management (09h): Supported LBA-Change 00:07:45.183 Unknown (0Ch): Supported 00:07:45.183 Unknown (12h): Supported 00:07:45.183 Copy (19h): Supported LBA-Change 00:07:45.183 Unknown (1Dh): Supported LBA-Change 00:07:45.183 00:07:45.183 Error Log 00:07:45.183 ========= 00:07:45.183 00:07:45.183 Arbitration 00:07:45.183 =========== 00:07:45.183 Arbitration Burst: no limit 00:07:45.183 00:07:45.183 Power Management 00:07:45.183 ================ 00:07:45.183 Number of Power States: 1 00:07:45.183 Current Power State: Power State #0 00:07:45.183 Power State #0: 00:07:45.183 Max Power: 25.00 W 00:07:45.183 Non-Operational State: Operational 00:07:45.183 Entry Latency: 16 microseconds 00:07:45.183 Exit Latency: 4 microseconds 00:07:45.183 Relative Read Throughput: 0 00:07:45.183 Relative Read Latency: 0 00:07:45.183 Relative Write Throughput: 0 00:07:45.183 Relative Write Latency: 0 00:07:45.183 Idle Power: Not Reported 00:07:45.183 Active Power: Not Reported 00:07:45.183 Non-Operational Permissive Mode: Not Supported 00:07:45.183 00:07:45.183 Health Information 00:07:45.183 ================== 00:07:45.183 Critical Warnings: 00:07:45.183 Available Spare Space: OK 00:07:45.183 Temperature: OK 00:07:45.183 Device Reliability: OK 00:07:45.183 Read Only: No 00:07:45.183 Volatile Memory Backup: OK 00:07:45.183 Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.183 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:45.183 Available Spare: 0% 00:07:45.183 Available Spare Threshold: 0% 00:07:45.183 Life Percentage Used: 0% 00:07:45.183 Data Units Read: 876 00:07:45.183 Data Units Written: 805 00:07:45.183 Host Read Commands: 36498 00:07:45.183 Host Write Commands: 35922 00:07:45.183 Controller Busy Time: 0 minutes 00:07:45.183 Power Cycles: 0 00:07:45.183 Power On Hours: 0 hours 00:07:45.183 Unsafe Shutdowns: 0 00:07:45.183 Unrecoverable Media Errors: 0 00:07:45.183 Lifetime Error Log Entries: 0 00:07:45.183 Warning Temperature Time: 0 minutes 00:07:45.183 Critical Temperature Time: 0 minutes 00:07:45.183 00:07:45.183 Number of Queues 00:07:45.183 ================ 00:07:45.183 Number of I/O Submission Queues: 64 00:07:45.183 Number of I/O Completion Queues: 64 00:07:45.183 00:07:45.183 ZNS Specific Controller Data 00:07:45.183 ============================ 00:07:45.183 Zone Append Size Limit: 0 00:07:45.183 00:07:45.183 00:07:45.183 Active Namespaces 00:07:45.183 ================= 00:07:45.183 Namespace ID:1 00:07:45.183 Error Recovery Timeout: Unlimited 00:07:45.183 
Command Set Identifier: NVM (00h) 00:07:45.183 Deallocate: Supported 00:07:45.184 Deallocated/Unwritten Error: Supported 00:07:45.184 Deallocated Read Value: All 0x00 00:07:45.184 Deallocate in Write Zeroes: Not Supported 00:07:45.184 Deallocated Guard Field: 0xFFFF 00:07:45.184 Flush: Supported 00:07:45.184 Reservation: Not Supported 00:07:45.184 Namespace Sharing Capabilities: Multiple Controllers 00:07:45.184 Size (in LBAs): 262144 (1GiB) 00:07:45.184 Capacity (in LBAs): 262144 (1GiB) 00:07:45.184 Utilization (in LBAs): 262144 (1GiB) 00:07:45.184 Thin Provisioning: Not Supported 00:07:45.184 Per-NS Atomic Units: No 00:07:45.184 Maximum Single Source Range Length: 128 00:07:45.184 Maximum Copy Length: 128 00:07:45.184 Maximum Source Range Count: 128 00:07:45.184 NGUID/EUI64 Never Reused: No 00:07:45.184 Namespace Write Protected: No 00:07:45.184 Endurance group ID: 1 00:07:45.184 Number of LBA Formats: 8 00:07:45.184 Current LBA Format: LBA Format #04 00:07:45.184 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.184 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.184 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.184 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.184 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.184 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.184 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.184 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.184 00:07:45.184 Get Feature FDP: 00:07:45.184 ================ 00:07:45.184 Enabled: Yes 00:07:45.184 FDP configuration index: 0 00:07:45.184 00:07:45.184 FDP configurations log page 00:07:45.184 =========================== 00:07:45.184 Number of FDP configurations: 1 00:07:45.184 Version: 0 00:07:45.184 Size: 112 00:07:45.184 FDP Configuration Descriptor: 0 00:07:45.184 Descriptor Size: 96 00:07:45.184 Reclaim Group Identifier format: 2 00:07:45.184 FDP Volatile Write Cache: Not Present 00:07:45.184 FDP Configuration: Valid 00:07:45.184 Vendor Specific Size: 0 00:07:45.184 Number of Reclaim Groups: 2 00:07:45.184 Number of Reclaim Unit Handles: 8 00:07:45.184 Max Placement Identifiers: 128 00:07:45.184 Number of Namespaces Supported: 256 00:07:45.184 Reclaim Unit Nominal Size: 6000000 bytes 00:07:45.184 Estimated Reclaim Unit Time Limit: Not Reported 00:07:45.184 RUH Desc #000: RUH Type: Initially Isolated 00:07:45.184 RUH Desc #001: RUH Type: Initially Isolated 00:07:45.184 RUH Desc #002: RUH Type: Initially Isolated 00:07:45.184 RUH Desc #003: RUH Type: Initially Isolated 00:07:45.184 RUH Desc #004: RUH Type: Initially Isolated 00:07:45.184 RUH Desc #005: RUH Type: Initially Isolated 00:07:45.184 RUH Desc #006: RUH Type: Initially Isolated 00:07:45.184 RUH Desc #007: RUH Type: Initially Isolated 00:07:45.184 00:07:45.184 FDP reclaim unit handle usage log page 00:07:45.184 ==================================[2024-11-25 23:09:17.362465] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62869 terminated unexpected 00:07:45.184 ==== 00:07:45.184 Number of Reclaim Unit Handles: 8 00:07:45.184 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:45.184 RUH Usage Desc #001: RUH Attributes: Unused 00:07:45.184 RUH Usage Desc #002: RUH Attributes: Unused 00:07:45.184 RUH Usage Desc #003: RUH Attributes: Unused 00:07:45.184 RUH Usage Desc #004: RUH Attributes: Unused 00:07:45.184 RUH Usage Desc #005: RUH Attributes: Unused 00:07:45.184 RUH Usage Desc #006: RUH Attributes: Unused 00:07:45.184 RUH Usage Desc
#007: RUH Attributes: Unused 00:07:45.184 00:07:45.184 FDP statistics log page 00:07:45.184 ======================= 00:07:45.184 Host bytes with metadata written: 508207104 00:07:45.184 Media bytes with metadata written: 508276736 00:07:45.184 Media bytes erased: 0 00:07:45.184 00:07:45.184 FDP events log page 00:07:45.184 =================== 00:07:45.184 Number of FDP events: 0 00:07:45.184 00:07:45.184 NVM Specific Namespace Data 00:07:45.184 =========================== 00:07:45.184 Logical Block Storage Tag Mask: 0 00:07:45.184 Protection Information Capabilities: 00:07:45.184 16b Guard Protection Information Storage Tag Support: No 00:07:45.184 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.184 Storage Tag Check Read Support: No 00:07:45.184 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.184 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.184 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.184 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.184 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.184 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.184 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.184 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.184 ===================================================== 00:07:45.184 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:45.184 ===================================================== 00:07:45.184 Controller Capabilities/Features 00:07:45.184 ================================ 00:07:45.184 Vendor ID: 1b36 00:07:45.184 Subsystem Vendor ID: 1af4 00:07:45.184 Serial Number: 12340 00:07:45.184 Model Number: QEMU NVMe Ctrl 00:07:45.184 Firmware Version: 8.0.0 00:07:45.184 Recommended Arb Burst: 6 00:07:45.184 IEEE OUI Identifier: 00 54 52 00:07:45.184 Multi-path I/O 00:07:45.184 May have multiple subsystem ports: No 00:07:45.184 May have multiple controllers: No 00:07:45.184 Associated with SR-IOV VF: No 00:07:45.184 Max Data Transfer Size: 524288 00:07:45.184 Max Number of Namespaces: 256 00:07:45.184 Max Number of I/O Queues: 64 00:07:45.184 NVMe Specification Version (VS): 1.4 00:07:45.184 NVMe Specification Version (Identify): 1.4 00:07:45.184 Maximum Queue Entries: 2048 00:07:45.184 Contiguous Queues Required: Yes 00:07:45.184 Arbitration Mechanisms Supported 00:07:45.184 Weighted Round Robin: Not Supported 00:07:45.184 Vendor Specific: Not Supported 00:07:45.184 Reset Timeout: 7500 ms 00:07:45.184 Doorbell Stride: 4 bytes 00:07:45.184 NVM Subsystem Reset: Not Supported 00:07:45.184 Command Sets Supported 00:07:45.184 NVM Command Set: Supported 00:07:45.184 Boot Partition: Not Supported 00:07:45.184 Memory Page Size Minimum: 4096 bytes 00:07:45.184 Memory Page Size Maximum: 65536 bytes 00:07:45.184 Persistent Memory Region: Not Supported 00:07:45.184 Optional Asynchronous Events Supported 00:07:45.184 Namespace Attribute Notices: Supported 00:07:45.184 Firmware Activation Notices: Not Supported 00:07:45.184 ANA Change Notices: Not Supported 00:07:45.184 PLE Aggregate Log Change Notices: Not Supported 00:07:45.184 LBA Status Info Alert Notices: Not Supported 00:07:45.184 EGE Aggregate Log Change 
Notices: Not Supported 00:07:45.184 Normal NVM Subsystem Shutdown event: Not Supported 00:07:45.184 Zone Descriptor Change Notices: Not Supported 00:07:45.185 Discovery Log Change Notices: Not Supported 00:07:45.185 Controller Attributes 00:07:45.185 128-bit Host Identifier: Not Supported 00:07:45.185 Non-Operational Permissive Mode: Not Supported 00:07:45.185 NVM Sets: Not Supported 00:07:45.185 Read Recovery Levels: Not Supported 00:07:45.185 Endurance Groups: Not Supported 00:07:45.185 Predictable Latency Mode: Not Supported 00:07:45.185 Traffic Based Keep ALive: Not Supported 00:07:45.185 Namespace Granularity: Not Supported 00:07:45.185 SQ Associations: Not Supported 00:07:45.185 UUID List: Not Supported 00:07:45.185 Multi-Domain Subsystem: Not Supported 00:07:45.185 Fixed Capacity Management: Not Supported 00:07:45.185 Variable Capacity Management: Not Supported 00:07:45.185 Delete Endurance Group: Not Supported 00:07:45.185 Delete NVM Set: Not Supported 00:07:45.185 Extended LBA Formats Supported: Supported 00:07:45.185 Flexible Data Placement Supported: Not Supported 00:07:45.185 00:07:45.185 Controller Memory Buffer Support 00:07:45.185 ================================ 00:07:45.185 Supported: No 00:07:45.185 00:07:45.185 Persistent Memory Region Support 00:07:45.185 ================================ 00:07:45.185 Supported: No 00:07:45.185 00:07:45.185 Admin Command Set Attributes 00:07:45.185 ============================ 00:07:45.185 Security Send/Receive: Not Supported 00:07:45.185 Format NVM: Supported 00:07:45.185 Firmware Activate/Download: Not Supported 00:07:45.185 Namespace Management: Supported 00:07:45.185 Device Self-Test: Not Supported 00:07:45.185 Directives: Supported 00:07:45.185 NVMe-MI: Not Supported 00:07:45.185 Virtualization Management: Not Supported 00:07:45.185 Doorbell Buffer Config: Supported 00:07:45.185 Get LBA Status Capability: Not Supported 00:07:45.185 Command & Feature Lockdown Capability: Not Supported 00:07:45.185 Abort Command Limit: 4 00:07:45.185 Async Event Request Limit: 4 00:07:45.185 Number of Firmware Slots: N/A 00:07:45.185 Firmware Slot 1 Read-Only: N/A 00:07:45.185 Firmware Activation Without Reset: N/A 00:07:45.185 Multiple Update Detection Support: N/A 00:07:45.185 Firmware Update Granularity: No Information Provided 00:07:45.185 Per-Namespace SMART Log: Yes 00:07:45.185 Asymmetric Namespace Access Log Page: Not Supported 00:07:45.185 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:45.185 Command Effects Log Page: Supported 00:07:45.185 Get Log Page Extended Data: Supported 00:07:45.185 Telemetry Log Pages: Not Supported 00:07:45.185 Persistent Event Log Pages: Not Supported 00:07:45.185 Supported Log Pages Log Page: May Support 00:07:45.185 Commands Supported & Effects Log Page: Not Supported 00:07:45.185 Feature Identifiers & Effects Log Page:May Support 00:07:45.185 NVMe-MI Commands & Effects Log Page: May Support 00:07:45.185 Data Area 4 for Telemetry Log: Not Supported 00:07:45.185 Error Log Page Entries Supported: 1 00:07:45.185 Keep Alive: Not Supported 00:07:45.185 00:07:45.185 NVM Command Set Attributes 00:07:45.185 ========================== 00:07:45.185 Submission Queue Entry Size 00:07:45.185 Max: 64 00:07:45.185 Min: 64 00:07:45.185 Completion Queue Entry Size 00:07:45.185 Max: 16 00:07:45.185 Min: 16 00:07:45.185 Number of Namespaces: 256 00:07:45.185 Compare Command: Supported 00:07:45.185 Write Uncorrectable Command: Not Supported 00:07:45.185 Dataset Management Command: Supported 00:07:45.185 Write Zeroes Command: 
Supported 00:07:45.185 Set Features Save Field: Supported 00:07:45.185 Reservations: Not Supported 00:07:45.185 Timestamp: Supported 00:07:45.185 Copy: Supported 00:07:45.185 Volatile Write Cache: Present 00:07:45.185 Atomic Write Unit (Normal): 1 00:07:45.185 Atomic Write Unit (PFail): 1 00:07:45.185 Atomic Compare & Write Unit: 1 00:07:45.185 Fused Compare & Write: Not Supported 00:07:45.185 Scatter-Gather List 00:07:45.185 SGL Command Set: Supported 00:07:45.185 SGL Keyed: Not Supported 00:07:45.185 SGL Bit Bucket Descriptor: Not Supported 00:07:45.185 SGL Metadata Pointer: Not Supported 00:07:45.185 Oversized SGL: Not Supported 00:07:45.185 SGL Metadata Address: Not Supported 00:07:45.185 SGL Offset: Not Supported 00:07:45.185 Transport SGL Data Block: Not Supported 00:07:45.185 Replay Protected Memory Block: Not Supported 00:07:45.185 00:07:45.185 Firmware Slot Information 00:07:45.185 ========================= 00:07:45.185 Active slot: 1 00:07:45.185 Slot 1 Firmware Revision: 1.0 00:07:45.185 00:07:45.185 00:07:45.185 Commands Supported and Effects 00:07:45.185 ============================== 00:07:45.185 Admin Commands 00:07:45.185 -------------- 00:07:45.185 Delete I/O Submission Queue (00h): Supported 00:07:45.185 Create I/O Submission Queue (01h): Supported 00:07:45.185 Get Log Page (02h): Supported 00:07:45.185 Delete I/O Completion Queue (04h): Supported 00:07:45.185 Create I/O Completion Queue (05h): Supported 00:07:45.185 Identify (06h): Supported 00:07:45.185 Abort (08h): Supported 00:07:45.185 Set Features (09h): Supported 00:07:45.185 Get Features (0Ah): Supported 00:07:45.185 Asynchronous Event Request (0Ch): Supported 00:07:45.185 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:45.185 Directive Send (19h): Supported 00:07:45.185 Directive Receive (1Ah): Supported 00:07:45.185 Virtualization Management (1Ch): Supported 00:07:45.185 Doorbell Buffer Config (7Ch): Supported 00:07:45.185 Format NVM (80h): Supported LBA-Change 00:07:45.185 I/O Commands 00:07:45.185 ------------ 00:07:45.185 Flush (00h): Supported LBA-Change 00:07:45.185 Write (01h): Supported LBA-Change 00:07:45.185 Read (02h): Supported 00:07:45.185 Compare (05h): Supported 00:07:45.185 Write Zeroes (08h): Supported LBA-Change 00:07:45.185 Dataset Management (09h): Supported LBA-Change 00:07:45.185 Unknown (0Ch): Supported 00:07:45.185 Unknown (12h): Supported 00:07:45.185 Copy (19h): Supported LBA-Change 00:07:45.185 Unknown (1Dh): Supported LBA-Change 00:07:45.185 00:07:45.185 Error Log 00:07:45.185 ========= 00:07:45.185 00:07:45.185 Arbitration 00:07:45.185 =========== 00:07:45.185 Arbitration Burst: no limit 00:07:45.185 00:07:45.185 Power Management 00:07:45.185 ================ 00:07:45.185 Number of Power States: 1 00:07:45.185 Current Power State: Power State #0 00:07:45.186 Power State #0: 00:07:45.186 Max Power: 25.00 W 00:07:45.186 Non-Operational State: Operational 00:07:45.186 Entry Latency: 16 microseconds 00:07:45.186 Exit Latency: 4 microseconds 00:07:45.186 Relative Read Throughput: 0 00:07:45.186 Relative Read Latency: 0 00:07:45.186 Relative Write Throughput: 0 00:07:45.186 Relative Write Latency: 0 00:07:45.186 Idle Power: Not Reported 00:07:45.186 Active Power: Not Reported 00:07:45.186 Non-Operational Permissive Mode: Not Supported 00:07:45.186 00:07:45.186 Health Information 00:07:45.186 ================== 00:07:45.186 Critical Warnings: 00:07:45.186 Available Spare Space: OK 00:07:45.186 Temperature: OK 00:07:45.186 Device Reliability: OK 00:07:45.186 Read Only: No 
00:07:45.186 Volatile Memory Backup: OK 00:07:45.186 Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.186 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:45.186 Available Spare: 0% 00:07:45.186 Available Spare Threshold: 0% 00:07:45.186 Life Percentage Used: 0% 00:07:45.186 Data Units Read: 636 00:07:45.186 Data Units Written: 565 00:07:45.186 Host Read Commands: 34271 00:07:45.186 Host Write Commands: 34057 00:07:45.186 Controller Busy Time: 0 minutes 00:07:45.186 Power Cycles: 0 00:07:45.186 Power On Hours: 0 hours 00:07:45.186 Unsafe Shutdowns: 0 00:07:45.186 Unrecoverable Media Errors: 0 00:07:45.186 Lifetime Error Log Entries: 0 00:07:45.186 Warning Temperature Time: 0 minutes 00:07:45.186 Critical Temperature Time: 0 minutes 00:07:45.186 00:07:45.186 Number of Queues 00:07:45.186 ================ 00:07:45.186 Number of I/O Submission Queues: 64 00:07:45.186 Number of I/O Completion Queues: 64 00:07:45.186 00:07:45.186 ZNS Specific Controller Data 00:07:45.186 ============================ 00:07:45.186 Zone Append Size Limit: 0 00:07:45.186 00:07:45.186 00:07:45.186 Active Namespaces 00:07:45.186 ================= 00:07:45.186 Namespace ID:1 00:07:45.186 Error Recovery Timeout: Unlimited 00:07:45.186 Command Set Identifier: NVM (00h) 00:07:45.186 Deallocate: Supported 00:07:45.186 Deallocated/Unwritten Error: Supported 00:07:45.186 Deallocated Read Value: All 0x00 00:07:45.186 Deallocate in Write Zeroes: Not Supported 00:07:45.186 Deallocated Guard Field: 0xFFFF 00:07:45.186 Flush: Supported 00:07:45.186 Reservation: Not Supported 00:07:45.186 Metadata Transferred as: Separate Metadata Buffer 00:07:45.186 Namespace Sharing Capabilities: Private 00:07:45.186 Size (in LBAs): 1548666 (5GiB) 00:07:45.186 Capacity (in LBAs): 1548666 (5GiB) 00:07:45.186 Utilization (in LBAs): 1548666 (5GiB) 00:07:45.186 Thin Provisioning: Not Supported 00:07:45.186 Per-NS Atomic Units: No 00:07:45.186 Maximum Single Source Range Length: 128 00:07:45.186 Maximum Copy Length: 128 00:07:45.186 Maximum Source Range Count: 128 00:07:45.186 NGUID/EUI64 Never Reused: No 00:07:45.186 Namespace Write Protected: No 00:07:45.186 Number of LBA Formats: 8 00:07:45.186 Current LBA Format: [2024-11-25 23:09:17.363825] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62869 terminated unexpected 00:07:45.186 LBA Format #07 00:07:45.186 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.186 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.186 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.186 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.186 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.186 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.186 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.186 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.186 00:07:45.186 NVM Specific Namespace Data 00:07:45.186 =========================== 00:07:45.186 Logical Block Storage Tag Mask: 0 00:07:45.186 Protection Information Capabilities: 00:07:45.186 16b Guard Protection Information Storage Tag Support: No 00:07:45.186 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.186 Storage Tag Check Read Support: No 00:07:45.186 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.186 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.186 Extended LBA Format #02: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:45.186 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.186 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.186 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.186 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.186 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.186 ===================================================== 00:07:45.186 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:45.186 ===================================================== 00:07:45.186 Controller Capabilities/Features 00:07:45.186 ================================ 00:07:45.186 Vendor ID: 1b36 00:07:45.186 Subsystem Vendor ID: 1af4 00:07:45.186 Serial Number: 12341 00:07:45.186 Model Number: QEMU NVMe Ctrl 00:07:45.186 Firmware Version: 8.0.0 00:07:45.186 Recommended Arb Burst: 6 00:07:45.186 IEEE OUI Identifier: 00 54 52 00:07:45.186 Multi-path I/O 00:07:45.186 May have multiple subsystem ports: No 00:07:45.186 May have multiple controllers: No 00:07:45.186 Associated with SR-IOV VF: No 00:07:45.186 Max Data Transfer Size: 524288 00:07:45.186 Max Number of Namespaces: 256 00:07:45.186 Max Number of I/O Queues: 64 00:07:45.186 NVMe Specification Version (VS): 1.4 00:07:45.186 NVMe Specification Version (Identify): 1.4 00:07:45.186 Maximum Queue Entries: 2048 00:07:45.186 Contiguous Queues Required: Yes 00:07:45.186 Arbitration Mechanisms Supported 00:07:45.186 Weighted Round Robin: Not Supported 00:07:45.186 Vendor Specific: Not Supported 00:07:45.186 Reset Timeout: 7500 ms 00:07:45.186 Doorbell Stride: 4 bytes 00:07:45.186 NVM Subsystem Reset: Not Supported 00:07:45.186 Command Sets Supported 00:07:45.186 NVM Command Set: Supported 00:07:45.186 Boot Partition: Not Supported 00:07:45.186 Memory Page Size Minimum: 4096 bytes 00:07:45.186 Memory Page Size Maximum: 65536 bytes 00:07:45.186 Persistent Memory Region: Not Supported 00:07:45.186 Optional Asynchronous Events Supported 00:07:45.186 Namespace Attribute Notices: Supported 00:07:45.186 Firmware Activation Notices: Not Supported 00:07:45.187 ANA Change Notices: Not Supported 00:07:45.187 PLE Aggregate Log Change Notices: Not Supported 00:07:45.187 LBA Status Info Alert Notices: Not Supported 00:07:45.187 EGE Aggregate Log Change Notices: Not Supported 00:07:45.187 Normal NVM Subsystem Shutdown event: Not Supported 00:07:45.187 Zone Descriptor Change Notices: Not Supported 00:07:45.187 Discovery Log Change Notices: Not Supported 00:07:45.187 Controller Attributes 00:07:45.187 128-bit Host Identifier: Not Supported 00:07:45.187 Non-Operational Permissive Mode: Not Supported 00:07:45.187 NVM Sets: Not Supported 00:07:45.187 Read Recovery Levels: Not Supported 00:07:45.187 Endurance Groups: Not Supported 00:07:45.187 Predictable Latency Mode: Not Supported 00:07:45.187 Traffic Based Keep ALive: Not Supported 00:07:45.187 Namespace Granularity: Not Supported 00:07:45.187 SQ Associations: Not Supported 00:07:45.187 UUID List: Not Supported 00:07:45.187 Multi-Domain Subsystem: Not Supported 00:07:45.187 Fixed Capacity Management: Not Supported 00:07:45.187 Variable Capacity Management: Not Supported 00:07:45.187 Delete Endurance Group: Not Supported 00:07:45.187 Delete NVM Set: Not Supported 00:07:45.187 Extended LBA Formats Supported: Supported 00:07:45.187 Flexible Data Placement 
Supported: Not Supported 00:07:45.187 00:07:45.187 Controller Memory Buffer Support 00:07:45.187 ================================ 00:07:45.187 Supported: No 00:07:45.187 00:07:45.187 Persistent Memory Region Support 00:07:45.187 ================================ 00:07:45.187 Supported: No 00:07:45.187 00:07:45.187 Admin Command Set Attributes 00:07:45.187 ============================ 00:07:45.187 Security Send/Receive: Not Supported 00:07:45.187 Format NVM: Supported 00:07:45.187 Firmware Activate/Download: Not Supported 00:07:45.187 Namespace Management: Supported 00:07:45.187 Device Self-Test: Not Supported 00:07:45.187 Directives: Supported 00:07:45.187 NVMe-MI: Not Supported 00:07:45.187 Virtualization Management: Not Supported 00:07:45.187 Doorbell Buffer Config: Supported 00:07:45.187 Get LBA Status Capability: Not Supported 00:07:45.187 Command & Feature Lockdown Capability: Not Supported 00:07:45.187 Abort Command Limit: 4 00:07:45.187 Async Event Request Limit: 4 00:07:45.187 Number of Firmware Slots: N/A 00:07:45.187 Firmware Slot 1 Read-Only: N/A 00:07:45.187 Firmware Activation Without Reset: N/A 00:07:45.187 Multiple Update Detection Support: N/A 00:07:45.187 Firmware Update Granularity: No Information Provided 00:07:45.187 Per-Namespace SMART Log: Yes 00:07:45.187 Asymmetric Namespace Access Log Page: Not Supported 00:07:45.187 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:45.187 Command Effects Log Page: Supported 00:07:45.187 Get Log Page Extended Data: Supported 00:07:45.187 Telemetry Log Pages: Not Supported 00:07:45.187 Persistent Event Log Pages: Not Supported 00:07:45.187 Supported Log Pages Log Page: May Support 00:07:45.187 Commands Supported & Effects Log Page: Not Supported 00:07:45.187 Feature Identifiers & Effects Log Page:May Support 00:07:45.187 NVMe-MI Commands & Effects Log Page: May Support 00:07:45.187 Data Area 4 for Telemetry Log: Not Supported 00:07:45.187 Error Log Page Entries Supported: 1 00:07:45.187 Keep Alive: Not Supported 00:07:45.187 00:07:45.187 NVM Command Set Attributes 00:07:45.187 ========================== 00:07:45.187 Submission Queue Entry Size 00:07:45.187 Max: 64 00:07:45.187 Min: 64 00:07:45.187 Completion Queue Entry Size 00:07:45.187 Max: 16 00:07:45.187 Min: 16 00:07:45.187 Number of Namespaces: 256 00:07:45.187 Compare Command: Supported 00:07:45.187 Write Uncorrectable Command: Not Supported 00:07:45.187 Dataset Management Command: Supported 00:07:45.187 Write Zeroes Command: Supported 00:07:45.187 Set Features Save Field: Supported 00:07:45.187 Reservations: Not Supported 00:07:45.187 Timestamp: Supported 00:07:45.187 Copy: Supported 00:07:45.187 Volatile Write Cache: Present 00:07:45.187 Atomic Write Unit (Normal): 1 00:07:45.187 Atomic Write Unit (PFail): 1 00:07:45.187 Atomic Compare & Write Unit: 1 00:07:45.187 Fused Compare & Write: Not Supported 00:07:45.187 Scatter-Gather List 00:07:45.187 SGL Command Set: Supported 00:07:45.187 SGL Keyed: Not Supported 00:07:45.187 SGL Bit Bucket Descriptor: Not Supported 00:07:45.187 SGL Metadata Pointer: Not Supported 00:07:45.187 Oversized SGL: Not Supported 00:07:45.187 SGL Metadata Address: Not Supported 00:07:45.187 SGL Offset: Not Supported 00:07:45.187 Transport SGL Data Block: Not Supported 00:07:45.187 Replay Protected Memory Block: Not Supported 00:07:45.187 00:07:45.187 Firmware Slot Information 00:07:45.187 ========================= 00:07:45.187 Active slot: 1 00:07:45.187 Slot 1 Firmware Revision: 1.0 00:07:45.187 00:07:45.187 00:07:45.187 Commands Supported and Effects 
00:07:45.187 ============================== 00:07:45.187 Admin Commands 00:07:45.187 -------------- 00:07:45.187 Delete I/O Submission Queue (00h): Supported 00:07:45.187 Create I/O Submission Queue (01h): Supported 00:07:45.187 Get Log Page (02h): Supported 00:07:45.187 Delete I/O Completion Queue (04h): Supported 00:07:45.187 Create I/O Completion Queue (05h): Supported 00:07:45.187 Identify (06h): Supported 00:07:45.187 Abort (08h): Supported 00:07:45.187 Set Features (09h): Supported 00:07:45.187 Get Features (0Ah): Supported 00:07:45.187 Asynchronous Event Request (0Ch): Supported 00:07:45.187 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:45.187 Directive Send (19h): Supported 00:07:45.187 Directive Receive (1Ah): Supported 00:07:45.187 Virtualization Management (1Ch): Supported 00:07:45.187 Doorbell Buffer Config (7Ch): Supported 00:07:45.187 Format NVM (80h): Supported LBA-Change 00:07:45.187 I/O Commands 00:07:45.187 ------------ 00:07:45.187 Flush (00h): Supported LBA-Change 00:07:45.187 Write (01h): Supported LBA-Change 00:07:45.187 Read (02h): Supported 00:07:45.187 Compare (05h): Supported 00:07:45.187 Write Zeroes (08h): Supported LBA-Change 00:07:45.187 Dataset Management (09h): Supported LBA-Change 00:07:45.187 Unknown (0Ch): Supported 00:07:45.187 Unknown (12h): Supported 00:07:45.187 Copy (19h): Supported LBA-Change 00:07:45.188 Unknown (1Dh): Supported LBA-Change 00:07:45.188 00:07:45.188 Error Log 00:07:45.188 ========= 00:07:45.188 00:07:45.188 Arbitration 00:07:45.188 =========== 00:07:45.188 Arbitration Burst: no limit 00:07:45.188 00:07:45.188 Power Management 00:07:45.188 ================ 00:07:45.188 Number of Power States: 1 00:07:45.188 Current Power State: Power State #0 00:07:45.188 Power State #0: 00:07:45.188 Max Power: 25.00 W 00:07:45.188 Non-Operational State: Operational 00:07:45.188 Entry Latency: 16 microseconds 00:07:45.188 Exit Latency: 4 microseconds 00:07:45.188 Relative Read Throughput: 0 00:07:45.188 Relative Read Latency: 0 00:07:45.188 Relative Write Throughput: 0 00:07:45.188 Relative Write Latency: 0 00:07:45.188 Idle Power: Not Reported 00:07:45.188 Active Power: Not Reported 00:07:45.188 Non-Operational Permissive Mode: Not Supported 00:07:45.188 00:07:45.188 Health Information 00:07:45.188 ================== 00:07:45.188 Critical Warnings: 00:07:45.188 Available Spare Space: OK 00:07:45.188 Temperature: OK 00:07:45.188 Device Reliability: OK 00:07:45.188 Read Only: No 00:07:45.188 Volatile Memory Backup: OK 00:07:45.188 Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.188 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:45.188 Available Spare: 0% 00:07:45.188 Available Spare Threshold: 0% 00:07:45.188 Life Percentage Used: 0% 00:07:45.188 Data Units Read: 958 00:07:45.188 Data Units Written: 831 00:07:45.188 Host Read Commands: 50027 00:07:45.188 Host Write Commands: 48924 00:07:45.188 Controller Busy Time: 0 minutes 00:07:45.188 Power Cycles: 0 00:07:45.188 Power On Hours: 0 hours 00:07:45.188 Unsafe Shutdowns: 0 00:07:45.188 Unrecoverable Media Errors: 0 00:07:45.188 Lifetime Error Log Entries: 0 00:07:45.188 Warning Temperature Time: 0 minutes 00:07:45.188 Critical Temperature Time: 0 minutes 00:07:45.188 00:07:45.188 Number of Queues 00:07:45.188 ================ 00:07:45.188 Number of I/O Submission Queues: 64 00:07:45.188 Number of I/O Completion Queues: 64 00:07:45.188 00:07:45.188 ZNS Specific Controller Data 00:07:45.188 ============================ 00:07:45.188 Zone Append Size Limit: 0 00:07:45.188 
00:07:45.188 00:07:45.188 Active Namespaces 00:07:45.188 ================= 00:07:45.188 Namespace ID:1 00:07:45.188 Error Recovery Timeout: Unlimited 00:07:45.188 Command Set Identifier: NVM (00h) 00:07:45.188 Deallocate: Supported 00:07:45.188 Deallocated/Unwritten Error: Supported 00:07:45.188 Deallocated Read Value: All 0x00 00:07:45.188 Deallocate in Write Zeroes: Not Supported 00:07:45.188 Deallocated Guard Field: 0xFFFF 00:07:45.188 Flush: Supported 00:07:45.188 Reservation: Not Supported 00:07:45.188 Namespace Sharing Capabilities: Private 00:07:45.188 Size (in LBAs): 1310720 (5GiB) 00:07:45.188 Capacity (in LBAs): 1310720 (5GiB) 00:07:45.188 Utilization (in LBAs): 1310720 (5GiB) 00:07:45.188 Thin Provisioning: Not Supported 00:07:45.188 Per-NS Atomic Units: No 00:07:45.188 Maximum Single Source Range Length: 128 00:07:45.188 Maximum Copy Length: 128 00:07:45.188 Maximum Source Range Count: 128 00:07:45.188 NGUID/EUI64 Never Reused: No 00:07:45.188 Namespace Write Protected: No 00:07:45.188 Number of LBA Formats: 8 00:07:45.188 Current LBA Format: LBA Format #04 00:07:45.188 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.188 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.188 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.188 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.188 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.188 LBA Format[2024-11-25 23:09:17.365146] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62869 terminated unexpected 00:07:45.188 #05: Data Size: 4096 Metadata Size: 8 00:07:45.188 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.188 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.188 00:07:45.188 NVM Specific Namespace Data 00:07:45.188 =========================== 00:07:45.188 Logical Block Storage Tag Mask: 0 00:07:45.188 Protection Information Capabilities: 00:07:45.188 16b Guard Protection Information Storage Tag Support: No 00:07:45.188 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.188 Storage Tag Check Read Support: No 00:07:45.188 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.188 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.188 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.188 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.188 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.188 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.188 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.188 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.188 ===================================================== 00:07:45.188 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:45.188 ===================================================== 00:07:45.188 Controller Capabilities/Features 00:07:45.188 ================================ 00:07:45.188 Vendor ID: 1b36 00:07:45.188 Subsystem Vendor ID: 1af4 00:07:45.188 Serial Number: 12342 00:07:45.188 Model Number: QEMU NVMe Ctrl 00:07:45.188 Firmware Version: 8.0.0 00:07:45.188 Recommended Arb Burst: 6 00:07:45.188 IEEE OUI Identifier: 00 54 52 00:07:45.188 Multi-path I/O 00:07:45.188 
May have multiple subsystem ports: No 00:07:45.188 May have multiple controllers: No 00:07:45.188 Associated with SR-IOV VF: No 00:07:45.188 Max Data Transfer Size: 524288 00:07:45.188 Max Number of Namespaces: 256 00:07:45.188 Max Number of I/O Queues: 64 00:07:45.188 NVMe Specification Version (VS): 1.4 00:07:45.188 NVMe Specification Version (Identify): 1.4 00:07:45.188 Maximum Queue Entries: 2048 00:07:45.188 Contiguous Queues Required: Yes 00:07:45.188 Arbitration Mechanisms Supported 00:07:45.188 Weighted Round Robin: Not Supported 00:07:45.188 Vendor Specific: Not Supported 00:07:45.188 Reset Timeout: 7500 ms 00:07:45.188 Doorbell Stride: 4 bytes 00:07:45.189 NVM Subsystem Reset: Not Supported 00:07:45.189 Command Sets Supported 00:07:45.189 NVM Command Set: Supported 00:07:45.189 Boot Partition: Not Supported 00:07:45.189 Memory Page Size Minimum: 4096 bytes 00:07:45.189 Memory Page Size Maximum: 65536 bytes 00:07:45.189 Persistent Memory Region: Not Supported 00:07:45.189 Optional Asynchronous Events Supported 00:07:45.189 Namespace Attribute Notices: Supported 00:07:45.189 Firmware Activation Notices: Not Supported 00:07:45.189 ANA Change Notices: Not Supported 00:07:45.189 PLE Aggregate Log Change Notices: Not Supported 00:07:45.189 LBA Status Info Alert Notices: Not Supported 00:07:45.189 EGE Aggregate Log Change Notices: Not Supported 00:07:45.189 Normal NVM Subsystem Shutdown event: Not Supported 00:07:45.189 Zone Descriptor Change Notices: Not Supported 00:07:45.189 Discovery Log Change Notices: Not Supported 00:07:45.189 Controller Attributes 00:07:45.189 128-bit Host Identifier: Not Supported 00:07:45.189 Non-Operational Permissive Mode: Not Supported 00:07:45.189 NVM Sets: Not Supported 00:07:45.189 Read Recovery Levels: Not Supported 00:07:45.189 Endurance Groups: Not Supported 00:07:45.189 Predictable Latency Mode: Not Supported 00:07:45.189 Traffic Based Keep ALive: Not Supported 00:07:45.189 Namespace Granularity: Not Supported 00:07:45.189 SQ Associations: Not Supported 00:07:45.189 UUID List: Not Supported 00:07:45.189 Multi-Domain Subsystem: Not Supported 00:07:45.189 Fixed Capacity Management: Not Supported 00:07:45.189 Variable Capacity Management: Not Supported 00:07:45.189 Delete Endurance Group: Not Supported 00:07:45.189 Delete NVM Set: Not Supported 00:07:45.189 Extended LBA Formats Supported: Supported 00:07:45.189 Flexible Data Placement Supported: Not Supported 00:07:45.189 00:07:45.189 Controller Memory Buffer Support 00:07:45.189 ================================ 00:07:45.189 Supported: No 00:07:45.189 00:07:45.189 Persistent Memory Region Support 00:07:45.189 ================================ 00:07:45.189 Supported: No 00:07:45.189 00:07:45.189 Admin Command Set Attributes 00:07:45.189 ============================ 00:07:45.189 Security Send/Receive: Not Supported 00:07:45.189 Format NVM: Supported 00:07:45.189 Firmware Activate/Download: Not Supported 00:07:45.189 Namespace Management: Supported 00:07:45.189 Device Self-Test: Not Supported 00:07:45.189 Directives: Supported 00:07:45.189 NVMe-MI: Not Supported 00:07:45.189 Virtualization Management: Not Supported 00:07:45.189 Doorbell Buffer Config: Supported 00:07:45.189 Get LBA Status Capability: Not Supported 00:07:45.189 Command & Feature Lockdown Capability: Not Supported 00:07:45.189 Abort Command Limit: 4 00:07:45.189 Async Event Request Limit: 4 00:07:45.189 Number of Firmware Slots: N/A 00:07:45.189 Firmware Slot 1 Read-Only: N/A 00:07:45.189 Firmware Activation Without Reset: N/A 00:07:45.189 
Multiple Update Detection Support: N/A 00:07:45.189 Firmware Update Granularity: No Information Provided 00:07:45.189 Per-Namespace SMART Log: Yes 00:07:45.189 Asymmetric Namespace Access Log Page: Not Supported 00:07:45.189 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:45.189 Command Effects Log Page: Supported 00:07:45.189 Get Log Page Extended Data: Supported 00:07:45.189 Telemetry Log Pages: Not Supported 00:07:45.189 Persistent Event Log Pages: Not Supported 00:07:45.189 Supported Log Pages Log Page: May Support 00:07:45.189 Commands Supported & Effects Log Page: Not Supported 00:07:45.189 Feature Identifiers & Effects Log Page:May Support 00:07:45.189 NVMe-MI Commands & Effects Log Page: May Support 00:07:45.189 Data Area 4 for Telemetry Log: Not Supported 00:07:45.189 Error Log Page Entries Supported: 1 00:07:45.189 Keep Alive: Not Supported 00:07:45.189 00:07:45.189 NVM Command Set Attributes 00:07:45.189 ========================== 00:07:45.189 Submission Queue Entry Size 00:07:45.189 Max: 64 00:07:45.189 Min: 64 00:07:45.189 Completion Queue Entry Size 00:07:45.189 Max: 16 00:07:45.189 Min: 16 00:07:45.189 Number of Namespaces: 256 00:07:45.189 Compare Command: Supported 00:07:45.189 Write Uncorrectable Command: Not Supported 00:07:45.189 Dataset Management Command: Supported 00:07:45.189 Write Zeroes Command: Supported 00:07:45.189 Set Features Save Field: Supported 00:07:45.189 Reservations: Not Supported 00:07:45.189 Timestamp: Supported 00:07:45.189 Copy: Supported 00:07:45.189 Volatile Write Cache: Present 00:07:45.189 Atomic Write Unit (Normal): 1 00:07:45.189 Atomic Write Unit (PFail): 1 00:07:45.189 Atomic Compare & Write Unit: 1 00:07:45.189 Fused Compare & Write: Not Supported 00:07:45.189 Scatter-Gather List 00:07:45.189 SGL Command Set: Supported 00:07:45.190 SGL Keyed: Not Supported 00:07:45.190 SGL Bit Bucket Descriptor: Not Supported 00:07:45.190 SGL Metadata Pointer: Not Supported 00:07:45.190 Oversized SGL: Not Supported 00:07:45.190 SGL Metadata Address: Not Supported 00:07:45.190 SGL Offset: Not Supported 00:07:45.190 Transport SGL Data Block: Not Supported 00:07:45.190 Replay Protected Memory Block: Not Supported 00:07:45.190 00:07:45.190 Firmware Slot Information 00:07:45.190 ========================= 00:07:45.190 Active slot: 1 00:07:45.190 Slot 1 Firmware Revision: 1.0 00:07:45.190 00:07:45.190 00:07:45.190 Commands Supported and Effects 00:07:45.190 ============================== 00:07:45.190 Admin Commands 00:07:45.190 -------------- 00:07:45.190 Delete I/O Submission Queue (00h): Supported 00:07:45.190 Create I/O Submission Queue (01h): Supported 00:07:45.190 Get Log Page (02h): Supported 00:07:45.190 Delete I/O Completion Queue (04h): Supported 00:07:45.190 Create I/O Completion Queue (05h): Supported 00:07:45.190 Identify (06h): Supported 00:07:45.190 Abort (08h): Supported 00:07:45.190 Set Features (09h): Supported 00:07:45.190 Get Features (0Ah): Supported 00:07:45.190 Asynchronous Event Request (0Ch): Supported 00:07:45.190 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:45.190 Directive Send (19h): Supported 00:07:45.190 Directive Receive (1Ah): Supported 00:07:45.190 Virtualization Management (1Ch): Supported 00:07:45.190 Doorbell Buffer Config (7Ch): Supported 00:07:45.190 Format NVM (80h): Supported LBA-Change 00:07:45.190 I/O Commands 00:07:45.190 ------------ 00:07:45.190 Flush (00h): Supported LBA-Change 00:07:45.190 Write (01h): Supported LBA-Change 00:07:45.190 Read (02h): Supported 00:07:45.190 Compare (05h): Supported 
00:07:45.190 Write Zeroes (08h): Supported LBA-Change 00:07:45.190 Dataset Management (09h): Supported LBA-Change 00:07:45.190 Unknown (0Ch): Supported 00:07:45.190 Unknown (12h): Supported 00:07:45.190 Copy (19h): Supported LBA-Change 00:07:45.190 Unknown (1Dh): Supported LBA-Change 00:07:45.190 00:07:45.190 Error Log 00:07:45.190 ========= 00:07:45.190 00:07:45.190 Arbitration 00:07:45.190 =========== 00:07:45.190 Arbitration Burst: no limit 00:07:45.190 00:07:45.190 Power Management 00:07:45.190 ================ 00:07:45.190 Number of Power States: 1 00:07:45.190 Current Power State: Power State #0 00:07:45.190 Power State #0: 00:07:45.190 Max Power: 25.00 W 00:07:45.190 Non-Operational State: Operational 00:07:45.190 Entry Latency: 16 microseconds 00:07:45.190 Exit Latency: 4 microseconds 00:07:45.190 Relative Read Throughput: 0 00:07:45.190 Relative Read Latency: 0 00:07:45.190 Relative Write Throughput: 0 00:07:45.190 Relative Write Latency: 0 00:07:45.190 Idle Power: Not Reported 00:07:45.190 Active Power: Not Reported 00:07:45.190 Non-Operational Permissive Mode: Not Supported 00:07:45.190 00:07:45.190 Health Information 00:07:45.190 ================== 00:07:45.190 Critical Warnings: 00:07:45.190 Available Spare Space: OK 00:07:45.190 Temperature: OK 00:07:45.190 Device Reliability: OK 00:07:45.190 Read Only: No 00:07:45.190 Volatile Memory Backup: OK 00:07:45.190 Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.190 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:45.190 Available Spare: 0% 00:07:45.190 Available Spare Threshold: 0% 00:07:45.190 Life Percentage Used: 0% 00:07:45.190 Data Units Read: 2097 00:07:45.190 Data Units Written: 1884 00:07:45.190 Host Read Commands: 104939 00:07:45.190 Host Write Commands: 103211 00:07:45.190 Controller Busy Time: 0 minutes 00:07:45.190 Power Cycles: 0 00:07:45.190 Power On Hours: 0 hours 00:07:45.190 Unsafe Shutdowns: 0 00:07:45.190 Unrecoverable Media Errors: 0 00:07:45.190 Lifetime Error Log Entries: 0 00:07:45.190 Warning Temperature Time: 0 minutes 00:07:45.190 Critical Temperature Time: 0 minutes 00:07:45.190 00:07:45.190 Number of Queues 00:07:45.190 ================ 00:07:45.190 Number of I/O Submission Queues: 64 00:07:45.190 Number of I/O Completion Queues: 64 00:07:45.190 00:07:45.190 ZNS Specific Controller Data 00:07:45.190 ============================ 00:07:45.190 Zone Append Size Limit: 0 00:07:45.190 00:07:45.190 00:07:45.190 Active Namespaces 00:07:45.190 ================= 00:07:45.190 Namespace ID:1 00:07:45.190 Error Recovery Timeout: Unlimited 00:07:45.190 Command Set Identifier: NVM (00h) 00:07:45.190 Deallocate: Supported 00:07:45.190 Deallocated/Unwritten Error: Supported 00:07:45.190 Deallocated Read Value: All 0x00 00:07:45.190 Deallocate in Write Zeroes: Not Supported 00:07:45.190 Deallocated Guard Field: 0xFFFF 00:07:45.190 Flush: Supported 00:07:45.190 Reservation: Not Supported 00:07:45.190 Namespace Sharing Capabilities: Private 00:07:45.190 Size (in LBAs): 1048576 (4GiB) 00:07:45.190 Capacity (in LBAs): 1048576 (4GiB) 00:07:45.190 Utilization (in LBAs): 1048576 (4GiB) 00:07:45.190 Thin Provisioning: Not Supported 00:07:45.190 Per-NS Atomic Units: No 00:07:45.190 Maximum Single Source Range Length: 128 00:07:45.190 Maximum Copy Length: 128 00:07:45.190 Maximum Source Range Count: 128 00:07:45.190 NGUID/EUI64 Never Reused: No 00:07:45.190 Namespace Write Protected: No 00:07:45.190 Number of LBA Formats: 8 00:07:45.190 Current LBA Format: LBA Format #04 00:07:45.190 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:07:45.190 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.190 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.190 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.190 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.190 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.190 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.190 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.190 00:07:45.190 NVM Specific Namespace Data 00:07:45.190 =========================== 00:07:45.190 Logical Block Storage Tag Mask: 0 00:07:45.190 Protection Information Capabilities: 00:07:45.190 16b Guard Protection Information Storage Tag Support: No 00:07:45.190 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.190 Storage Tag Check Read Support: No 00:07:45.190 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.190 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.190 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.190 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Namespace ID:2 00:07:45.191 Error Recovery Timeout: Unlimited 00:07:45.191 Command Set Identifier: NVM (00h) 00:07:45.191 Deallocate: Supported 00:07:45.191 Deallocated/Unwritten Error: Supported 00:07:45.191 Deallocated Read Value: All 0x00 00:07:45.191 Deallocate in Write Zeroes: Not Supported 00:07:45.191 Deallocated Guard Field: 0xFFFF 00:07:45.191 Flush: Supported 00:07:45.191 Reservation: Not Supported 00:07:45.191 Namespace Sharing Capabilities: Private 00:07:45.191 Size (in LBAs): 1048576 (4GiB) 00:07:45.191 Capacity (in LBAs): 1048576 (4GiB) 00:07:45.191 Utilization (in LBAs): 1048576 (4GiB) 00:07:45.191 Thin Provisioning: Not Supported 00:07:45.191 Per-NS Atomic Units: No 00:07:45.191 Maximum Single Source Range Length: 128 00:07:45.191 Maximum Copy Length: 128 00:07:45.191 Maximum Source Range Count: 128 00:07:45.191 NGUID/EUI64 Never Reused: No 00:07:45.191 Namespace Write Protected: No 00:07:45.191 Number of LBA Formats: 8 00:07:45.191 Current LBA Format: LBA Format #04 00:07:45.191 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.191 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.191 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.191 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.191 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.191 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.191 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.191 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.191 00:07:45.191 NVM Specific Namespace Data 00:07:45.191 =========================== 00:07:45.191 Logical Block Storage Tag Mask: 0 00:07:45.191 Protection Information Capabilities: 00:07:45.191 16b Guard Protection Information Storage Tag Support: No 00:07:45.191 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
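The Health Information sections report temperature in Kelvin with a Celsius value in parentheses; the tool evidently applies a flat 273 offset rather than 273.15. A one-line check of both figures from the dump above, assuming that offset:
  $ echo $(( 323 - 273 )) $(( 343 - 273 ))  # current / threshold in Celsius
  50 70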
00:07:45.191 Storage Tag Check Read Support: No 00:07:45.191 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Namespace ID:3 00:07:45.191 Error Recovery Timeout: Unlimited 00:07:45.191 Command Set Identifier: NVM (00h) 00:07:45.191 Deallocate: Supported 00:07:45.191 Deallocated/Unwritten Error: Supported 00:07:45.191 Deallocated Read Value: All 0x00 00:07:45.191 Deallocate in Write Zeroes: Not Supported 00:07:45.191 Deallocated Guard Field: 0xFFFF 00:07:45.191 Flush: Supported 00:07:45.191 Reservation: Not Supported 00:07:45.191 Namespace Sharing Capabilities: Private 00:07:45.191 Size (in LBAs): 1048576 (4GiB) 00:07:45.191 Capacity (in LBAs): 1048576 (4GiB) 00:07:45.191 Utilization (in LBAs): 1048576 (4GiB) 00:07:45.191 Thin Provisioning: Not Supported 00:07:45.191 Per-NS Atomic Units: No 00:07:45.191 Maximum Single Source Range Length: 128 00:07:45.191 Maximum Copy Length: 128 00:07:45.191 Maximum Source Range Count: 128 00:07:45.191 NGUID/EUI64 Never Reused: No 00:07:45.191 Namespace Write Protected: No 00:07:45.191 Number of LBA Formats: 8 00:07:45.191 Current LBA Format: LBA Format #04 00:07:45.191 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.191 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.191 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.191 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.191 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.191 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.191 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.191 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.191 00:07:45.191 NVM Specific Namespace Data 00:07:45.191 =========================== 00:07:45.191 Logical Block Storage Tag Mask: 0 00:07:45.191 Protection Information Capabilities: 00:07:45.191 16b Guard Protection Information Storage Tag Support: No 00:07:45.191 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.191 Storage Tag Check Read Support: No 00:07:45.191 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.191 23:09:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:45.191 23:09:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:45.454 ===================================================== 00:07:45.454 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:45.454 ===================================================== 00:07:45.454 Controller Capabilities/Features 00:07:45.454 ================================ 00:07:45.454 Vendor ID: 1b36 00:07:45.454 Subsystem Vendor ID: 1af4 00:07:45.454 Serial Number: 12340 00:07:45.454 Model Number: QEMU NVMe Ctrl 00:07:45.454 Firmware Version: 8.0.0 00:07:45.454 Recommended Arb Burst: 6 00:07:45.454 IEEE OUI Identifier: 00 54 52 00:07:45.454 Multi-path I/O 00:07:45.454 May have multiple subsystem ports: No 00:07:45.454 May have multiple controllers: No 00:07:45.454 Associated with SR-IOV VF: No 00:07:45.454 Max Data Transfer Size: 524288 00:07:45.454 Max Number of Namespaces: 256 00:07:45.454 Max Number of I/O Queues: 64 00:07:45.454 NVMe Specification Version (VS): 1.4 00:07:45.454 NVMe Specification Version (Identify): 1.4 00:07:45.454 Maximum Queue Entries: 2048 00:07:45.454 Contiguous Queues Required: Yes 00:07:45.454 Arbitration Mechanisms Supported 00:07:45.454 Weighted Round Robin: Not Supported 00:07:45.454 Vendor Specific: Not Supported 00:07:45.454 Reset Timeout: 7500 ms 00:07:45.454 Doorbell Stride: 4 bytes 00:07:45.454 NVM Subsystem Reset: Not Supported 00:07:45.454 Command Sets Supported 00:07:45.454 NVM Command Set: Supported 00:07:45.454 Boot Partition: Not Supported 00:07:45.454 Memory Page Size Minimum: 4096 bytes 00:07:45.454 Memory Page Size Maximum: 65536 bytes 00:07:45.454 Persistent Memory Region: Not Supported 00:07:45.454 Optional Asynchronous Events Supported 00:07:45.454 Namespace Attribute Notices: Supported 00:07:45.454 Firmware Activation Notices: Not Supported 00:07:45.454 ANA Change Notices: Not Supported 00:07:45.454 PLE Aggregate Log Change Notices: Not Supported 00:07:45.454 LBA Status Info Alert Notices: Not Supported 00:07:45.454 EGE Aggregate Log Change Notices: Not Supported 00:07:45.454 Normal NVM Subsystem Shutdown event: Not Supported 00:07:45.454 Zone Descriptor Change Notices: Not Supported 00:07:45.454 Discovery Log Change Notices: Not Supported 00:07:45.454 Controller Attributes 00:07:45.454 128-bit Host Identifier: Not Supported 00:07:45.454 Non-Operational Permissive Mode: Not Supported 00:07:45.454 NVM Sets: Not Supported 00:07:45.454 Read Recovery Levels: Not Supported 00:07:45.454 Endurance Groups: Not Supported 00:07:45.454 Predictable Latency Mode: Not Supported 00:07:45.454 Traffic Based Keep ALive: Not Supported 00:07:45.454 Namespace Granularity: Not Supported 00:07:45.454 SQ Associations: Not Supported 00:07:45.454 UUID List: Not Supported 00:07:45.454 Multi-Domain Subsystem: Not Supported 00:07:45.454 Fixed Capacity Management: Not Supported 00:07:45.454 Variable Capacity Management: Not Supported 00:07:45.454 Delete Endurance Group: Not Supported 00:07:45.454 Delete NVM Set: Not Supported 00:07:45.454 Extended LBA Formats Supported: Supported 00:07:45.454 Flexible Data Placement Supported: Not Supported 00:07:45.454 00:07:45.454 Controller Memory Buffer Support 00:07:45.454 ================================ 00:07:45.454 Supported: No 00:07:45.454 00:07:45.454 Persistent Memory Region Support 00:07:45.454 
================================ 00:07:45.454 Supported: No 00:07:45.454 00:07:45.454 Admin Command Set Attributes 00:07:45.454 ============================ 00:07:45.454 Security Send/Receive: Not Supported 00:07:45.454 Format NVM: Supported 00:07:45.454 Firmware Activate/Download: Not Supported 00:07:45.454 Namespace Management: Supported 00:07:45.454 Device Self-Test: Not Supported 00:07:45.454 Directives: Supported 00:07:45.454 NVMe-MI: Not Supported 00:07:45.454 Virtualization Management: Not Supported 00:07:45.454 Doorbell Buffer Config: Supported 00:07:45.454 Get LBA Status Capability: Not Supported 00:07:45.454 Command & Feature Lockdown Capability: Not Supported 00:07:45.454 Abort Command Limit: 4 00:07:45.454 Async Event Request Limit: 4 00:07:45.454 Number of Firmware Slots: N/A 00:07:45.454 Firmware Slot 1 Read-Only: N/A 00:07:45.454 Firmware Activation Without Reset: N/A 00:07:45.454 Multiple Update Detection Support: N/A 00:07:45.454 Firmware Update Granularity: No Information Provided 00:07:45.454 Per-Namespace SMART Log: Yes 00:07:45.454 Asymmetric Namespace Access Log Page: Not Supported 00:07:45.454 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:45.454 Command Effects Log Page: Supported 00:07:45.454 Get Log Page Extended Data: Supported 00:07:45.454 Telemetry Log Pages: Not Supported 00:07:45.454 Persistent Event Log Pages: Not Supported 00:07:45.454 Supported Log Pages Log Page: May Support 00:07:45.454 Commands Supported & Effects Log Page: Not Supported 00:07:45.455 Feature Identifiers & Effects Log Page:May Support 00:07:45.455 NVMe-MI Commands & Effects Log Page: May Support 00:07:45.455 Data Area 4 for Telemetry Log: Not Supported 00:07:45.455 Error Log Page Entries Supported: 1 00:07:45.455 Keep Alive: Not Supported 00:07:45.455 00:07:45.455 NVM Command Set Attributes 00:07:45.455 ========================== 00:07:45.455 Submission Queue Entry Size 00:07:45.455 Max: 64 00:07:45.455 Min: 64 00:07:45.455 Completion Queue Entry Size 00:07:45.455 Max: 16 00:07:45.455 Min: 16 00:07:45.455 Number of Namespaces: 256 00:07:45.455 Compare Command: Supported 00:07:45.455 Write Uncorrectable Command: Not Supported 00:07:45.455 Dataset Management Command: Supported 00:07:45.455 Write Zeroes Command: Supported 00:07:45.455 Set Features Save Field: Supported 00:07:45.455 Reservations: Not Supported 00:07:45.455 Timestamp: Supported 00:07:45.455 Copy: Supported 00:07:45.455 Volatile Write Cache: Present 00:07:45.455 Atomic Write Unit (Normal): 1 00:07:45.455 Atomic Write Unit (PFail): 1 00:07:45.455 Atomic Compare & Write Unit: 1 00:07:45.455 Fused Compare & Write: Not Supported 00:07:45.455 Scatter-Gather List 00:07:45.455 SGL Command Set: Supported 00:07:45.455 SGL Keyed: Not Supported 00:07:45.455 SGL Bit Bucket Descriptor: Not Supported 00:07:45.455 SGL Metadata Pointer: Not Supported 00:07:45.455 Oversized SGL: Not Supported 00:07:45.455 SGL Metadata Address: Not Supported 00:07:45.455 SGL Offset: Not Supported 00:07:45.455 Transport SGL Data Block: Not Supported 00:07:45.455 Replay Protected Memory Block: Not Supported 00:07:45.455 00:07:45.455 Firmware Slot Information 00:07:45.455 ========================= 00:07:45.455 Active slot: 1 00:07:45.455 Slot 1 Firmware Revision: 1.0 00:07:45.455 00:07:45.455 00:07:45.455 Commands Supported and Effects 00:07:45.455 ============================== 00:07:45.455 Admin Commands 00:07:45.455 -------------- 00:07:45.455 Delete I/O Submission Queue (00h): Supported 00:07:45.455 Create I/O Submission Queue (01h): Supported 00:07:45.455 
Get Log Page (02h): Supported 00:07:45.455 Delete I/O Completion Queue (04h): Supported 00:07:45.455 Create I/O Completion Queue (05h): Supported 00:07:45.455 Identify (06h): Supported 00:07:45.455 Abort (08h): Supported 00:07:45.455 Set Features (09h): Supported 00:07:45.455 Get Features (0Ah): Supported 00:07:45.455 Asynchronous Event Request (0Ch): Supported 00:07:45.455 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:45.455 Directive Send (19h): Supported 00:07:45.455 Directive Receive (1Ah): Supported 00:07:45.455 Virtualization Management (1Ch): Supported 00:07:45.455 Doorbell Buffer Config (7Ch): Supported 00:07:45.455 Format NVM (80h): Supported LBA-Change 00:07:45.455 I/O Commands 00:07:45.455 ------------ 00:07:45.455 Flush (00h): Supported LBA-Change 00:07:45.455 Write (01h): Supported LBA-Change 00:07:45.455 Read (02h): Supported 00:07:45.455 Compare (05h): Supported 00:07:45.455 Write Zeroes (08h): Supported LBA-Change 00:07:45.455 Dataset Management (09h): Supported LBA-Change 00:07:45.455 Unknown (0Ch): Supported 00:07:45.455 Unknown (12h): Supported 00:07:45.455 Copy (19h): Supported LBA-Change 00:07:45.455 Unknown (1Dh): Supported LBA-Change 00:07:45.455 00:07:45.455 Error Log 00:07:45.455 ========= 00:07:45.455 00:07:45.455 Arbitration 00:07:45.455 =========== 00:07:45.455 Arbitration Burst: no limit 00:07:45.455 00:07:45.455 Power Management 00:07:45.455 ================ 00:07:45.455 Number of Power States: 1 00:07:45.455 Current Power State: Power State #0 00:07:45.455 Power State #0: 00:07:45.455 Max Power: 25.00 W 00:07:45.455 Non-Operational State: Operational 00:07:45.455 Entry Latency: 16 microseconds 00:07:45.455 Exit Latency: 4 microseconds 00:07:45.455 Relative Read Throughput: 0 00:07:45.455 Relative Read Latency: 0 00:07:45.455 Relative Write Throughput: 0 00:07:45.455 Relative Write Latency: 0 00:07:45.455 Idle Power: Not Reported 00:07:45.455 Active Power: Not Reported 00:07:45.455 Non-Operational Permissive Mode: Not Supported 00:07:45.455 00:07:45.455 Health Information 00:07:45.455 ================== 00:07:45.455 Critical Warnings: 00:07:45.455 Available Spare Space: OK 00:07:45.455 Temperature: OK 00:07:45.455 Device Reliability: OK 00:07:45.455 Read Only: No 00:07:45.455 Volatile Memory Backup: OK 00:07:45.455 Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.455 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:45.455 Available Spare: 0% 00:07:45.455 Available Spare Threshold: 0% 00:07:45.455 Life Percentage Used: 0% 00:07:45.455 Data Units Read: 636 00:07:45.455 Data Units Written: 565 00:07:45.455 Host Read Commands: 34271 00:07:45.455 Host Write Commands: 34057 00:07:45.455 Controller Busy Time: 0 minutes 00:07:45.455 Power Cycles: 0 00:07:45.455 Power On Hours: 0 hours 00:07:45.455 Unsafe Shutdowns: 0 00:07:45.455 Unrecoverable Media Errors: 0 00:07:45.455 Lifetime Error Log Entries: 0 00:07:45.455 Warning Temperature Time: 0 minutes 00:07:45.455 Critical Temperature Time: 0 minutes 00:07:45.455 00:07:45.455 Number of Queues 00:07:45.455 ================ 00:07:45.455 Number of I/O Submission Queues: 64 00:07:45.455 Number of I/O Completion Queues: 64 00:07:45.455 00:07:45.455 ZNS Specific Controller Data 00:07:45.455 ============================ 00:07:45.455 Zone Append Size Limit: 0 00:07:45.455 00:07:45.455 00:07:45.455 Active Namespaces 00:07:45.455 ================= 00:07:45.455 Namespace ID:1 00:07:45.455 Error Recovery Timeout: Unlimited 00:07:45.455 Command Set Identifier: NVM (00h) 00:07:45.455 Deallocate: Supported 
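Data Units Read/Written in the health log are counted in units of 512,000 bytes (thousands of 512-byte blocks) per the NVMe specification, independent of the namespace's LBA format. Converting the 0000:00:10.0 figures above on that basis, as an illustrative check:
  $ echo $(( 636 * 512000 ))  # Data Units Read, in bytes (~326 MB)
  325632000
  $ echo $(( 565 * 512000 ))  # Data Units Written, in bytes (~289 MB)
  289280000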
00:07:45.455 Deallocated/Unwritten Error: Supported 00:07:45.455 Deallocated Read Value: All 0x00 00:07:45.455 Deallocate in Write Zeroes: Not Supported 00:07:45.455 Deallocated Guard Field: 0xFFFF 00:07:45.455 Flush: Supported 00:07:45.455 Reservation: Not Supported 00:07:45.455 Metadata Transferred as: Separate Metadata Buffer 00:07:45.455 Namespace Sharing Capabilities: Private 00:07:45.455 Size (in LBAs): 1548666 (5GiB) 00:07:45.455 Capacity (in LBAs): 1548666 (5GiB) 00:07:45.455 Utilization (in LBAs): 1548666 (5GiB) 00:07:45.455 Thin Provisioning: Not Supported 00:07:45.455 Per-NS Atomic Units: No 00:07:45.455 Maximum Single Source Range Length: 128 00:07:45.455 Maximum Copy Length: 128 00:07:45.455 Maximum Source Range Count: 128 00:07:45.455 NGUID/EUI64 Never Reused: No 00:07:45.455 Namespace Write Protected: No 00:07:45.455 Number of LBA Formats: 8 00:07:45.455 Current LBA Format: LBA Format #07 00:07:45.455 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.455 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.455 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.456 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.456 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.456 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.456 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.456 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.456 00:07:45.456 NVM Specific Namespace Data 00:07:45.456 =========================== 00:07:45.456 Logical Block Storage Tag Mask: 0 00:07:45.456 Protection Information Capabilities: 00:07:45.456 16b Guard Protection Information Storage Tag Support: No 00:07:45.456 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.456 Storage Tag Check Read Support: No 00:07:45.456 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.456 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.456 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.456 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.456 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.456 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.456 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.456 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.456 23:09:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:45.456 23:09:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:45.718 ===================================================== 00:07:45.718 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:45.718 ===================================================== 00:07:45.718 Controller Capabilities/Features 00:07:45.718 ================================ 00:07:45.718 Vendor ID: 1b36 00:07:45.718 Subsystem Vendor ID: 1af4 00:07:45.718 Serial Number: 12341 00:07:45.718 Model Number: QEMU NVMe Ctrl 00:07:45.718 Firmware Version: 8.0.0 00:07:45.718 Recommended Arb Burst: 6 00:07:45.718 IEEE OUI Identifier: 00 54 52 00:07:45.718 Multi-path I/O 00:07:45.718 May have multiple subsystem ports: No 00:07:45.718 May have multiple 
controllers: No 00:07:45.718 Associated with SR-IOV VF: No 00:07:45.718 Max Data Transfer Size: 524288 00:07:45.718 Max Number of Namespaces: 256 00:07:45.718 Max Number of I/O Queues: 64 00:07:45.718 NVMe Specification Version (VS): 1.4 00:07:45.718 NVMe Specification Version (Identify): 1.4 00:07:45.718 Maximum Queue Entries: 2048 00:07:45.718 Contiguous Queues Required: Yes 00:07:45.718 Arbitration Mechanisms Supported 00:07:45.718 Weighted Round Robin: Not Supported 00:07:45.718 Vendor Specific: Not Supported 00:07:45.718 Reset Timeout: 7500 ms 00:07:45.718 Doorbell Stride: 4 bytes 00:07:45.718 NVM Subsystem Reset: Not Supported 00:07:45.718 Command Sets Supported 00:07:45.718 NVM Command Set: Supported 00:07:45.718 Boot Partition: Not Supported 00:07:45.718 Memory Page Size Minimum: 4096 bytes 00:07:45.718 Memory Page Size Maximum: 65536 bytes 00:07:45.718 Persistent Memory Region: Not Supported 00:07:45.718 Optional Asynchronous Events Supported 00:07:45.718 Namespace Attribute Notices: Supported 00:07:45.718 Firmware Activation Notices: Not Supported 00:07:45.718 ANA Change Notices: Not Supported 00:07:45.718 PLE Aggregate Log Change Notices: Not Supported 00:07:45.718 LBA Status Info Alert Notices: Not Supported 00:07:45.718 EGE Aggregate Log Change Notices: Not Supported 00:07:45.718 Normal NVM Subsystem Shutdown event: Not Supported 00:07:45.718 Zone Descriptor Change Notices: Not Supported 00:07:45.718 Discovery Log Change Notices: Not Supported 00:07:45.718 Controller Attributes 00:07:45.718 128-bit Host Identifier: Not Supported 00:07:45.718 Non-Operational Permissive Mode: Not Supported 00:07:45.718 NVM Sets: Not Supported 00:07:45.718 Read Recovery Levels: Not Supported 00:07:45.718 Endurance Groups: Not Supported 00:07:45.718 Predictable Latency Mode: Not Supported 00:07:45.718 Traffic Based Keep ALive: Not Supported 00:07:45.718 Namespace Granularity: Not Supported 00:07:45.718 SQ Associations: Not Supported 00:07:45.718 UUID List: Not Supported 00:07:45.718 Multi-Domain Subsystem: Not Supported 00:07:45.718 Fixed Capacity Management: Not Supported 00:07:45.718 Variable Capacity Management: Not Supported 00:07:45.718 Delete Endurance Group: Not Supported 00:07:45.719 Delete NVM Set: Not Supported 00:07:45.719 Extended LBA Formats Supported: Supported 00:07:45.719 Flexible Data Placement Supported: Not Supported 00:07:45.719 00:07:45.719 Controller Memory Buffer Support 00:07:45.719 ================================ 00:07:45.719 Supported: No 00:07:45.719 00:07:45.719 Persistent Memory Region Support 00:07:45.719 ================================ 00:07:45.719 Supported: No 00:07:45.719 00:07:45.719 Admin Command Set Attributes 00:07:45.719 ============================ 00:07:45.719 Security Send/Receive: Not Supported 00:07:45.719 Format NVM: Supported 00:07:45.719 Firmware Activate/Download: Not Supported 00:07:45.719 Namespace Management: Supported 00:07:45.719 Device Self-Test: Not Supported 00:07:45.719 Directives: Supported 00:07:45.719 NVMe-MI: Not Supported 00:07:45.719 Virtualization Management: Not Supported 00:07:45.719 Doorbell Buffer Config: Supported 00:07:45.719 Get LBA Status Capability: Not Supported 00:07:45.719 Command & Feature Lockdown Capability: Not Supported 00:07:45.719 Abort Command Limit: 4 00:07:45.719 Async Event Request Limit: 4 00:07:45.719 Number of Firmware Slots: N/A 00:07:45.719 Firmware Slot 1 Read-Only: N/A 00:07:45.719 Firmware Activation Without Reset: N/A 00:07:45.719 Multiple Update Detection Support: N/A 00:07:45.719 Firmware Update 
Granularity: No Information Provided 00:07:45.719 Per-Namespace SMART Log: Yes 00:07:45.719 Asymmetric Namespace Access Log Page: Not Supported 00:07:45.719 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:45.719 Command Effects Log Page: Supported 00:07:45.719 Get Log Page Extended Data: Supported 00:07:45.719 Telemetry Log Pages: Not Supported 00:07:45.719 Persistent Event Log Pages: Not Supported 00:07:45.719 Supported Log Pages Log Page: May Support 00:07:45.719 Commands Supported & Effects Log Page: Not Supported 00:07:45.719 Feature Identifiers & Effects Log Page:May Support 00:07:45.719 NVMe-MI Commands & Effects Log Page: May Support 00:07:45.719 Data Area 4 for Telemetry Log: Not Supported 00:07:45.719 Error Log Page Entries Supported: 1 00:07:45.719 Keep Alive: Not Supported 00:07:45.719 00:07:45.719 NVM Command Set Attributes 00:07:45.719 ========================== 00:07:45.719 Submission Queue Entry Size 00:07:45.719 Max: 64 00:07:45.719 Min: 64 00:07:45.719 Completion Queue Entry Size 00:07:45.719 Max: 16 00:07:45.719 Min: 16 00:07:45.719 Number of Namespaces: 256 00:07:45.719 Compare Command: Supported 00:07:45.719 Write Uncorrectable Command: Not Supported 00:07:45.719 Dataset Management Command: Supported 00:07:45.719 Write Zeroes Command: Supported 00:07:45.719 Set Features Save Field: Supported 00:07:45.719 Reservations: Not Supported 00:07:45.719 Timestamp: Supported 00:07:45.719 Copy: Supported 00:07:45.719 Volatile Write Cache: Present 00:07:45.719 Atomic Write Unit (Normal): 1 00:07:45.719 Atomic Write Unit (PFail): 1 00:07:45.719 Atomic Compare & Write Unit: 1 00:07:45.719 Fused Compare & Write: Not Supported 00:07:45.719 Scatter-Gather List 00:07:45.719 SGL Command Set: Supported 00:07:45.719 SGL Keyed: Not Supported 00:07:45.719 SGL Bit Bucket Descriptor: Not Supported 00:07:45.719 SGL Metadata Pointer: Not Supported 00:07:45.719 Oversized SGL: Not Supported 00:07:45.719 SGL Metadata Address: Not Supported 00:07:45.719 SGL Offset: Not Supported 00:07:45.719 Transport SGL Data Block: Not Supported 00:07:45.719 Replay Protected Memory Block: Not Supported 00:07:45.719 00:07:45.719 Firmware Slot Information 00:07:45.719 ========================= 00:07:45.719 Active slot: 1 00:07:45.719 Slot 1 Firmware Revision: 1.0 00:07:45.719 00:07:45.719 00:07:45.719 Commands Supported and Effects 00:07:45.719 ============================== 00:07:45.719 Admin Commands 00:07:45.719 -------------- 00:07:45.719 Delete I/O Submission Queue (00h): Supported 00:07:45.719 Create I/O Submission Queue (01h): Supported 00:07:45.719 Get Log Page (02h): Supported 00:07:45.719 Delete I/O Completion Queue (04h): Supported 00:07:45.719 Create I/O Completion Queue (05h): Supported 00:07:45.719 Identify (06h): Supported 00:07:45.719 Abort (08h): Supported 00:07:45.719 Set Features (09h): Supported 00:07:45.719 Get Features (0Ah): Supported 00:07:45.719 Asynchronous Event Request (0Ch): Supported 00:07:45.719 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:45.719 Directive Send (19h): Supported 00:07:45.719 Directive Receive (1Ah): Supported 00:07:45.719 Virtualization Management (1Ch): Supported 00:07:45.719 Doorbell Buffer Config (7Ch): Supported 00:07:45.719 Format NVM (80h): Supported LBA-Change 00:07:45.719 I/O Commands 00:07:45.719 ------------ 00:07:45.719 Flush (00h): Supported LBA-Change 00:07:45.719 Write (01h): Supported LBA-Change 00:07:45.719 Read (02h): Supported 00:07:45.719 Compare (05h): Supported 00:07:45.719 Write Zeroes (08h): Supported LBA-Change 00:07:45.719 
Dataset Management (09h): Supported LBA-Change 00:07:45.719 Unknown (0Ch): Supported 00:07:45.719 Unknown (12h): Supported 00:07:45.719 Copy (19h): Supported LBA-Change 00:07:45.719 Unknown (1Dh): Supported LBA-Change 00:07:45.719 00:07:45.719 Error Log 00:07:45.719 ========= 00:07:45.719 00:07:45.719 Arbitration 00:07:45.719 =========== 00:07:45.719 Arbitration Burst: no limit 00:07:45.719 00:07:45.719 Power Management 00:07:45.719 ================ 00:07:45.719 Number of Power States: 1 00:07:45.719 Current Power State: Power State #0 00:07:45.719 Power State #0: 00:07:45.719 Max Power: 25.00 W 00:07:45.719 Non-Operational State: Operational 00:07:45.719 Entry Latency: 16 microseconds 00:07:45.719 Exit Latency: 4 microseconds 00:07:45.719 Relative Read Throughput: 0 00:07:45.719 Relative Read Latency: 0 00:07:45.719 Relative Write Throughput: 0 00:07:45.719 Relative Write Latency: 0 00:07:45.719 Idle Power: Not Reported 00:07:45.719 Active Power: Not Reported 00:07:45.719 Non-Operational Permissive Mode: Not Supported 00:07:45.719 00:07:45.719 Health Information 00:07:45.719 ================== 00:07:45.719 Critical Warnings: 00:07:45.719 Available Spare Space: OK 00:07:45.720 Temperature: OK 00:07:45.720 Device Reliability: OK 00:07:45.720 Read Only: No 00:07:45.720 Volatile Memory Backup: OK 00:07:45.720 Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.720 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:45.720 Available Spare: 0% 00:07:45.720 Available Spare Threshold: 0% 00:07:45.720 Life Percentage Used: 0% 00:07:45.720 Data Units Read: 958 00:07:45.720 Data Units Written: 831 00:07:45.720 Host Read Commands: 50027 00:07:45.720 Host Write Commands: 48924 00:07:45.720 Controller Busy Time: 0 minutes 00:07:45.720 Power Cycles: 0 00:07:45.720 Power On Hours: 0 hours 00:07:45.720 Unsafe Shutdowns: 0 00:07:45.720 Unrecoverable Media Errors: 0 00:07:45.720 Lifetime Error Log Entries: 0 00:07:45.720 Warning Temperature Time: 0 minutes 00:07:45.720 Critical Temperature Time: 0 minutes 00:07:45.720 00:07:45.720 Number of Queues 00:07:45.720 ================ 00:07:45.720 Number of I/O Submission Queues: 64 00:07:45.720 Number of I/O Completion Queues: 64 00:07:45.720 00:07:45.720 ZNS Specific Controller Data 00:07:45.720 ============================ 00:07:45.720 Zone Append Size Limit: 0 00:07:45.720 00:07:45.720 00:07:45.720 Active Namespaces 00:07:45.720 ================= 00:07:45.720 Namespace ID:1 00:07:45.720 Error Recovery Timeout: Unlimited 00:07:45.720 Command Set Identifier: NVM (00h) 00:07:45.720 Deallocate: Supported 00:07:45.720 Deallocated/Unwritten Error: Supported 00:07:45.720 Deallocated Read Value: All 0x00 00:07:45.720 Deallocate in Write Zeroes: Not Supported 00:07:45.720 Deallocated Guard Field: 0xFFFF 00:07:45.720 Flush: Supported 00:07:45.720 Reservation: Not Supported 00:07:45.720 Namespace Sharing Capabilities: Private 00:07:45.720 Size (in LBAs): 1310720 (5GiB) 00:07:45.720 Capacity (in LBAs): 1310720 (5GiB) 00:07:45.720 Utilization (in LBAs): 1310720 (5GiB) 00:07:45.720 Thin Provisioning: Not Supported 00:07:45.720 Per-NS Atomic Units: No 00:07:45.720 Maximum Single Source Range Length: 128 00:07:45.720 Maximum Copy Length: 128 00:07:45.720 Maximum Source Range Count: 128 00:07:45.720 NGUID/EUI64 Never Reused: No 00:07:45.720 Namespace Write Protected: No 00:07:45.720 Number of LBA Formats: 8 00:07:45.720 Current LBA Format: LBA Format #04 00:07:45.720 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.720 LBA Format #01: Data Size: 512 Metadata Size: 8 
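Because every controller dump repeats the same field layout, individual fields are easy to pull out of a saved copy of this console log with standard text tools. A hypothetical post-processing snippet (the log file name is illustrative):
  $ grep -o 'Serial Number: [0-9]*' nvme-identify.log                # one line per controller dump
  $ grep 'Current LBA Format' nvme-identify.log | awk '{print $NF}'  # active format per dump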
00:07:45.720 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.720 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.720 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.720 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.720 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.720 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.720 00:07:45.720 NVM Specific Namespace Data 00:07:45.720 =========================== 00:07:45.720 Logical Block Storage Tag Mask: 0 00:07:45.720 Protection Information Capabilities: 00:07:45.720 16b Guard Protection Information Storage Tag Support: No 00:07:45.720 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.720 Storage Tag Check Read Support: No 00:07:45.720 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.720 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.720 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.720 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.720 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.720 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.720 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.720 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.720 23:09:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:45.720 23:09:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:45.720 ===================================================== 00:07:45.720 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:45.720 ===================================================== 00:07:45.720 Controller Capabilities/Features 00:07:45.720 ================================ 00:07:45.720 Vendor ID: 1b36 00:07:45.720 Subsystem Vendor ID: 1af4 00:07:45.720 Serial Number: 12342 00:07:45.720 Model Number: QEMU NVMe Ctrl 00:07:45.720 Firmware Version: 8.0.0 00:07:45.720 Recommended Arb Burst: 6 00:07:45.720 IEEE OUI Identifier: 00 54 52 00:07:45.720 Multi-path I/O 00:07:45.720 May have multiple subsystem ports: No 00:07:45.720 May have multiple controllers: No 00:07:45.720 Associated with SR-IOV VF: No 00:07:45.720 Max Data Transfer Size: 524288 00:07:45.720 Max Number of Namespaces: 256 00:07:45.720 Max Number of I/O Queues: 64 00:07:45.720 NVMe Specification Version (VS): 1.4 00:07:45.720 NVMe Specification Version (Identify): 1.4 00:07:45.720 Maximum Queue Entries: 2048 00:07:45.720 Contiguous Queues Required: Yes 00:07:45.720 Arbitration Mechanisms Supported 00:07:45.720 Weighted Round Robin: Not Supported 00:07:45.720 Vendor Specific: Not Supported 00:07:45.720 Reset Timeout: 7500 ms 00:07:45.720 Doorbell Stride: 4 bytes 00:07:45.720 NVM Subsystem Reset: Not Supported 00:07:45.720 Command Sets Supported 00:07:45.720 NVM Command Set: Supported 00:07:45.720 Boot Partition: Not Supported 00:07:45.720 Memory Page Size Minimum: 4096 bytes 00:07:45.720 Memory Page Size Maximum: 65536 bytes 00:07:45.720 Persistent Memory Region: Not Supported 00:07:45.720 Optional Asynchronous Events Supported 00:07:45.720 Namespace Attribute Notices: Supported 00:07:45.720 Firmware 
Activation Notices: Not Supported 00:07:45.720 ANA Change Notices: Not Supported 00:07:45.720 PLE Aggregate Log Change Notices: Not Supported 00:07:45.720 LBA Status Info Alert Notices: Not Supported 00:07:45.720 EGE Aggregate Log Change Notices: Not Supported 00:07:45.720 Normal NVM Subsystem Shutdown event: Not Supported 00:07:45.720 Zone Descriptor Change Notices: Not Supported 00:07:45.720 Discovery Log Change Notices: Not Supported 00:07:45.720 Controller Attributes 00:07:45.720 128-bit Host Identifier: Not Supported 00:07:45.720 Non-Operational Permissive Mode: Not Supported 00:07:45.720 NVM Sets: Not Supported 00:07:45.720 Read Recovery Levels: Not Supported 00:07:45.720 Endurance Groups: Not Supported 00:07:45.720 Predictable Latency Mode: Not Supported 00:07:45.720 Traffic Based Keep ALive: Not Supported 00:07:45.720 Namespace Granularity: Not Supported 00:07:45.720 SQ Associations: Not Supported 00:07:45.720 UUID List: Not Supported 00:07:45.720 Multi-Domain Subsystem: Not Supported 00:07:45.720 Fixed Capacity Management: Not Supported 00:07:45.720 Variable Capacity Management: Not Supported 00:07:45.720 Delete Endurance Group: Not Supported 00:07:45.720 Delete NVM Set: Not Supported 00:07:45.720 Extended LBA Formats Supported: Supported 00:07:45.720 Flexible Data Placement Supported: Not Supported 00:07:45.720 00:07:45.720 Controller Memory Buffer Support 00:07:45.720 ================================ 00:07:45.720 Supported: No 00:07:45.720 00:07:45.720 Persistent Memory Region Support 00:07:45.720 ================================ 00:07:45.720 Supported: No 00:07:45.721 00:07:45.721 Admin Command Set Attributes 00:07:45.721 ============================ 00:07:45.721 Security Send/Receive: Not Supported 00:07:45.721 Format NVM: Supported 00:07:45.721 Firmware Activate/Download: Not Supported 00:07:45.721 Namespace Management: Supported 00:07:45.721 Device Self-Test: Not Supported 00:07:45.721 Directives: Supported 00:07:45.721 NVMe-MI: Not Supported 00:07:45.721 Virtualization Management: Not Supported 00:07:45.721 Doorbell Buffer Config: Supported 00:07:45.721 Get LBA Status Capability: Not Supported 00:07:45.721 Command & Feature Lockdown Capability: Not Supported 00:07:45.721 Abort Command Limit: 4 00:07:45.721 Async Event Request Limit: 4 00:07:45.721 Number of Firmware Slots: N/A 00:07:45.721 Firmware Slot 1 Read-Only: N/A 00:07:45.721 Firmware Activation Without Reset: N/A 00:07:45.721 Multiple Update Detection Support: N/A 00:07:45.721 Firmware Update Granularity: No Information Provided 00:07:45.721 Per-Namespace SMART Log: Yes 00:07:45.721 Asymmetric Namespace Access Log Page: Not Supported 00:07:45.721 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:45.721 Command Effects Log Page: Supported 00:07:45.721 Get Log Page Extended Data: Supported 00:07:45.721 Telemetry Log Pages: Not Supported 00:07:45.721 Persistent Event Log Pages: Not Supported 00:07:45.721 Supported Log Pages Log Page: May Support 00:07:45.721 Commands Supported & Effects Log Page: Not Supported 00:07:45.721 Feature Identifiers & Effects Log Page:May Support 00:07:45.721 NVMe-MI Commands & Effects Log Page: May Support 00:07:45.721 Data Area 4 for Telemetry Log: Not Supported 00:07:45.721 Error Log Page Entries Supported: 1 00:07:45.721 Keep Alive: Not Supported 00:07:45.721 00:07:45.721 NVM Command Set Attributes 00:07:45.721 ========================== 00:07:45.721 Submission Queue Entry Size 00:07:45.721 Max: 64 00:07:45.721 Min: 64 00:07:45.721 Completion Queue Entry Size 00:07:45.721 Max: 16 
00:07:45.721 Min: 16 00:07:45.721 Number of Namespaces: 256 00:07:45.721 Compare Command: Supported 00:07:45.721 Write Uncorrectable Command: Not Supported 00:07:45.721 Dataset Management Command: Supported 00:07:45.721 Write Zeroes Command: Supported 00:07:45.721 Set Features Save Field: Supported 00:07:45.721 Reservations: Not Supported 00:07:45.721 Timestamp: Supported 00:07:45.721 Copy: Supported 00:07:45.721 Volatile Write Cache: Present 00:07:45.721 Atomic Write Unit (Normal): 1 00:07:45.721 Atomic Write Unit (PFail): 1 00:07:45.721 Atomic Compare & Write Unit: 1 00:07:45.721 Fused Compare & Write: Not Supported 00:07:45.721 Scatter-Gather List 00:07:45.721 SGL Command Set: Supported 00:07:45.721 SGL Keyed: Not Supported 00:07:45.721 SGL Bit Bucket Descriptor: Not Supported 00:07:45.721 SGL Metadata Pointer: Not Supported 00:07:45.721 Oversized SGL: Not Supported 00:07:45.721 SGL Metadata Address: Not Supported 00:07:45.721 SGL Offset: Not Supported 00:07:45.721 Transport SGL Data Block: Not Supported 00:07:45.721 Replay Protected Memory Block: Not Supported 00:07:45.721 00:07:45.721 Firmware Slot Information 00:07:45.721 ========================= 00:07:45.721 Active slot: 1 00:07:45.721 Slot 1 Firmware Revision: 1.0 00:07:45.721 00:07:45.721 00:07:45.721 Commands Supported and Effects 00:07:45.721 ============================== 00:07:45.721 Admin Commands 00:07:45.721 -------------- 00:07:45.721 Delete I/O Submission Queue (00h): Supported 00:07:45.721 Create I/O Submission Queue (01h): Supported 00:07:45.721 Get Log Page (02h): Supported 00:07:45.721 Delete I/O Completion Queue (04h): Supported 00:07:45.721 Create I/O Completion Queue (05h): Supported 00:07:45.721 Identify (06h): Supported 00:07:45.721 Abort (08h): Supported 00:07:45.721 Set Features (09h): Supported 00:07:45.721 Get Features (0Ah): Supported 00:07:45.721 Asynchronous Event Request (0Ch): Supported 00:07:45.721 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:45.721 Directive Send (19h): Supported 00:07:45.721 Directive Receive (1Ah): Supported 00:07:45.721 Virtualization Management (1Ch): Supported 00:07:45.721 Doorbell Buffer Config (7Ch): Supported 00:07:45.721 Format NVM (80h): Supported LBA-Change 00:07:45.721 I/O Commands 00:07:45.721 ------------ 00:07:45.721 Flush (00h): Supported LBA-Change 00:07:45.721 Write (01h): Supported LBA-Change 00:07:45.721 Read (02h): Supported 00:07:45.721 Compare (05h): Supported 00:07:45.721 Write Zeroes (08h): Supported LBA-Change 00:07:45.721 Dataset Management (09h): Supported LBA-Change 00:07:45.721 Unknown (0Ch): Supported 00:07:45.721 Unknown (12h): Supported 00:07:45.721 Copy (19h): Supported LBA-Change 00:07:45.721 Unknown (1Dh): Supported LBA-Change 00:07:45.721 00:07:45.721 Error Log 00:07:45.721 ========= 00:07:45.721 00:07:45.721 Arbitration 00:07:45.721 =========== 00:07:45.721 Arbitration Burst: no limit 00:07:45.721 00:07:45.721 Power Management 00:07:45.721 ================ 00:07:45.721 Number of Power States: 1 00:07:45.721 Current Power State: Power State #0 00:07:45.721 Power State #0: 00:07:45.721 Max Power: 25.00 W 00:07:45.721 Non-Operational State: Operational 00:07:45.721 Entry Latency: 16 microseconds 00:07:45.721 Exit Latency: 4 microseconds 00:07:45.721 Relative Read Throughput: 0 00:07:45.721 Relative Read Latency: 0 00:07:45.721 Relative Write Throughput: 0 00:07:45.721 Relative Write Latency: 0 00:07:45.721 Idle Power: Not Reported 00:07:45.721 Active Power: Not Reported 00:07:45.721 Non-Operational Permissive Mode: Not Supported 
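Each of these dumps comes from the same loop in nvme.sh, which invokes spdk_nvme_identify once per discovered PCIe address (the for bdf in "${bdfs[@]}" traces above). To reproduce a single dump against one controller, the invocation is exactly the one captured in the trace; only the traddr changes per device, and root privileges are typically required for PCIe access:
  $ sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:12.0' -i 0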
00:07:45.721 00:07:45.721 Health Information 00:07:45.721 ================== 00:07:45.721 Critical Warnings: 00:07:45.721 Available Spare Space: OK 00:07:45.721 Temperature: OK 00:07:45.721 Device Reliability: OK 00:07:45.721 Read Only: No 00:07:45.721 Volatile Memory Backup: OK 00:07:45.721 Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.721 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:45.721 Available Spare: 0% 00:07:45.721 Available Spare Threshold: 0% 00:07:45.721 Life Percentage Used: 0% 00:07:45.721 Data Units Read: 2097 00:07:45.721 Data Units Written: 1884 00:07:45.721 Host Read Commands: 104939 00:07:45.721 Host Write Commands: 103211 00:07:45.721 Controller Busy Time: 0 minutes 00:07:45.721 Power Cycles: 0 00:07:45.721 Power On Hours: 0 hours 00:07:45.721 Unsafe Shutdowns: 0 00:07:45.721 Unrecoverable Media Errors: 0 00:07:45.721 Lifetime Error Log Entries: 0 00:07:45.721 Warning Temperature Time: 0 minutes 00:07:45.721 Critical Temperature Time: 0 minutes 00:07:45.721 00:07:45.721 Number of Queues 00:07:45.721 ================ 00:07:45.721 Number of I/O Submission Queues: 64 00:07:45.721 Number of I/O Completion Queues: 64 00:07:45.721 00:07:45.721 ZNS Specific Controller Data 00:07:45.721 ============================ 00:07:45.721 Zone Append Size Limit: 0 00:07:45.721 00:07:45.721 00:07:45.721 Active Namespaces 00:07:45.721 ================= 00:07:45.721 Namespace ID:1 00:07:45.722 Error Recovery Timeout: Unlimited 00:07:45.722 Command Set Identifier: NVM (00h) 00:07:45.722 Deallocate: Supported 00:07:45.722 Deallocated/Unwritten Error: Supported 00:07:45.722 Deallocated Read Value: All 0x00 00:07:45.722 Deallocate in Write Zeroes: Not Supported 00:07:45.722 Deallocated Guard Field: 0xFFFF 00:07:45.722 Flush: Supported 00:07:45.722 Reservation: Not Supported 00:07:45.722 Namespace Sharing Capabilities: Private 00:07:45.722 Size (in LBAs): 1048576 (4GiB) 00:07:45.722 Capacity (in LBAs): 1048576 (4GiB) 00:07:45.722 Utilization (in LBAs): 1048576 (4GiB) 00:07:45.722 Thin Provisioning: Not Supported 00:07:45.722 Per-NS Atomic Units: No 00:07:45.722 Maximum Single Source Range Length: 128 00:07:45.722 Maximum Copy Length: 128 00:07:45.722 Maximum Source Range Count: 128 00:07:45.722 NGUID/EUI64 Never Reused: No 00:07:45.722 Namespace Write Protected: No 00:07:45.722 Number of LBA Formats: 8 00:07:45.722 Current LBA Format: LBA Format #04 00:07:45.722 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.722 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.722 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.722 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.722 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.722 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.722 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.722 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.722 00:07:45.722 NVM Specific Namespace Data 00:07:45.722 =========================== 00:07:45.722 Logical Block Storage Tag Mask: 0 00:07:45.722 Protection Information Capabilities: 00:07:45.722 16b Guard Protection Information Storage Tag Support: No 00:07:45.722 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.722 Storage Tag Check Read Support: No 00:07:45.722 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Namespace ID:2 00:07:45.722 Error Recovery Timeout: Unlimited 00:07:45.722 Command Set Identifier: NVM (00h) 00:07:45.722 Deallocate: Supported 00:07:45.722 Deallocated/Unwritten Error: Supported 00:07:45.722 Deallocated Read Value: All 0x00 00:07:45.722 Deallocate in Write Zeroes: Not Supported 00:07:45.722 Deallocated Guard Field: 0xFFFF 00:07:45.722 Flush: Supported 00:07:45.722 Reservation: Not Supported 00:07:45.722 Namespace Sharing Capabilities: Private 00:07:45.722 Size (in LBAs): 1048576 (4GiB) 00:07:45.722 Capacity (in LBAs): 1048576 (4GiB) 00:07:45.722 Utilization (in LBAs): 1048576 (4GiB) 00:07:45.722 Thin Provisioning: Not Supported 00:07:45.722 Per-NS Atomic Units: No 00:07:45.722 Maximum Single Source Range Length: 128 00:07:45.722 Maximum Copy Length: 128 00:07:45.722 Maximum Source Range Count: 128 00:07:45.722 NGUID/EUI64 Never Reused: No 00:07:45.722 Namespace Write Protected: No 00:07:45.722 Number of LBA Formats: 8 00:07:45.722 Current LBA Format: LBA Format #04 00:07:45.722 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.722 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.722 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.722 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.722 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.722 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.722 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.722 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.722 00:07:45.722 NVM Specific Namespace Data 00:07:45.722 =========================== 00:07:45.722 Logical Block Storage Tag Mask: 0 00:07:45.722 Protection Information Capabilities: 00:07:45.722 16b Guard Protection Information Storage Tag Support: No 00:07:45.722 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.722 Storage Tag Check Read Support: No 00:07:45.722 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.722 Namespace ID:3 00:07:45.722 Error Recovery Timeout: Unlimited 00:07:45.722 Command Set Identifier: NVM (00h) 00:07:45.722 Deallocate: Supported 00:07:45.722 Deallocated/Unwritten Error: Supported 00:07:45.722 Deallocated Read 
Value: All 0x00 00:07:45.722 Deallocate in Write Zeroes: Not Supported 00:07:45.722 Deallocated Guard Field: 0xFFFF 00:07:45.722 Flush: Supported 00:07:45.722 Reservation: Not Supported 00:07:45.722 Namespace Sharing Capabilities: Private 00:07:45.722 Size (in LBAs): 1048576 (4GiB) 00:07:45.722 Capacity (in LBAs): 1048576 (4GiB) 00:07:45.722 Utilization (in LBAs): 1048576 (4GiB) 00:07:45.722 Thin Provisioning: Not Supported 00:07:45.722 Per-NS Atomic Units: No 00:07:45.722 Maximum Single Source Range Length: 128 00:07:45.722 Maximum Copy Length: 128 00:07:45.722 Maximum Source Range Count: 128 00:07:45.722 NGUID/EUI64 Never Reused: No 00:07:45.722 Namespace Write Protected: No 00:07:45.722 Number of LBA Formats: 8 00:07:45.722 Current LBA Format: LBA Format #04 00:07:45.722 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.722 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.722 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.722 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.722 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.722 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.723 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.723 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.723 00:07:45.723 NVM Specific Namespace Data 00:07:45.723 =========================== 00:07:45.723 Logical Block Storage Tag Mask: 0 00:07:45.723 Protection Information Capabilities: 00:07:45.723 16b Guard Protection Information Storage Tag Support: No 00:07:45.723 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.984 Storage Tag Check Read Support: No 00:07:45.984 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.984 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.984 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.984 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.984 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.984 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.984 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.984 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.984 23:09:18 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:45.984 23:09:18 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:45.984 ===================================================== 00:07:45.984 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:45.984 ===================================================== 00:07:45.984 Controller Capabilities/Features 00:07:45.984 ================================ 00:07:45.984 Vendor ID: 1b36 00:07:45.984 Subsystem Vendor ID: 1af4 00:07:45.984 Serial Number: 12343 00:07:45.984 Model Number: QEMU NVMe Ctrl 00:07:45.984 Firmware Version: 8.0.0 00:07:45.984 Recommended Arb Burst: 6 00:07:45.984 IEEE OUI Identifier: 00 54 52 00:07:45.984 Multi-path I/O 00:07:45.984 May have multiple subsystem ports: No 00:07:45.984 May have multiple controllers: Yes 00:07:45.984 Associated with SR-IOV VF: No 00:07:45.984 Max Data Transfer Size: 524288 00:07:45.984 Max Number of Namespaces: 
256 00:07:45.984 Max Number of I/O Queues: 64 00:07:45.984 NVMe Specification Version (VS): 1.4 00:07:45.984 NVMe Specification Version (Identify): 1.4 00:07:45.984 Maximum Queue Entries: 2048 00:07:45.984 Contiguous Queues Required: Yes 00:07:45.984 Arbitration Mechanisms Supported 00:07:45.984 Weighted Round Robin: Not Supported 00:07:45.984 Vendor Specific: Not Supported 00:07:45.984 Reset Timeout: 7500 ms 00:07:45.984 Doorbell Stride: 4 bytes 00:07:45.984 NVM Subsystem Reset: Not Supported 00:07:45.984 Command Sets Supported 00:07:45.984 NVM Command Set: Supported 00:07:45.984 Boot Partition: Not Supported 00:07:45.985 Memory Page Size Minimum: 4096 bytes 00:07:45.985 Memory Page Size Maximum: 65536 bytes 00:07:45.985 Persistent Memory Region: Not Supported 00:07:45.985 Optional Asynchronous Events Supported 00:07:45.985 Namespace Attribute Notices: Supported 00:07:45.985 Firmware Activation Notices: Not Supported 00:07:45.985 ANA Change Notices: Not Supported 00:07:45.985 PLE Aggregate Log Change Notices: Not Supported 00:07:45.985 LBA Status Info Alert Notices: Not Supported 00:07:45.985 EGE Aggregate Log Change Notices: Not Supported 00:07:45.985 Normal NVM Subsystem Shutdown event: Not Supported 00:07:45.985 Zone Descriptor Change Notices: Not Supported 00:07:45.985 Discovery Log Change Notices: Not Supported 00:07:45.985 Controller Attributes 00:07:45.985 128-bit Host Identifier: Not Supported 00:07:45.985 Non-Operational Permissive Mode: Not Supported 00:07:45.985 NVM Sets: Not Supported 00:07:45.985 Read Recovery Levels: Not Supported 00:07:45.985 Endurance Groups: Supported 00:07:45.985 Predictable Latency Mode: Not Supported 00:07:45.985 Traffic Based Keep Alive: Not Supported 00:07:45.985 Namespace Granularity: Not Supported 00:07:45.985 SQ Associations: Not Supported 00:07:45.985 UUID List: Not Supported 00:07:45.985 Multi-Domain Subsystem: Not Supported 00:07:45.985 Fixed Capacity Management: Not Supported 00:07:45.985 Variable Capacity Management: Not Supported 00:07:45.985 Delete Endurance Group: Not Supported 00:07:45.985 Delete NVM Set: Not Supported 00:07:45.985 Extended LBA Formats Supported: Supported 00:07:45.985 Flexible Data Placement Supported: Supported 00:07:45.985 00:07:45.985 Controller Memory Buffer Support 00:07:45.985 ================================ 00:07:45.985 Supported: No 00:07:45.985 00:07:45.985 Persistent Memory Region Support 00:07:45.985 ================================ 00:07:45.985 Supported: No 00:07:45.985 00:07:45.985 Admin Command Set Attributes 00:07:45.985 ============================ 00:07:45.985 Security Send/Receive: Not Supported 00:07:45.985 Format NVM: Supported 00:07:45.985 Firmware Activate/Download: Not Supported 00:07:45.985 Namespace Management: Supported 00:07:45.985 Device Self-Test: Not Supported 00:07:45.985 Directives: Supported 00:07:45.985 NVMe-MI: Not Supported 00:07:45.985 Virtualization Management: Not Supported 00:07:45.985 Doorbell Buffer Config: Supported 00:07:45.985 Get LBA Status Capability: Not Supported 00:07:45.985 Command & Feature Lockdown Capability: Not Supported 00:07:45.985 Abort Command Limit: 4 00:07:45.985 Async Event Request Limit: 4 00:07:45.985 Number of Firmware Slots: N/A 00:07:45.985 Firmware Slot 1 Read-Only: N/A 00:07:45.985 Firmware Activation Without Reset: N/A 00:07:45.985 Multiple Update Detection Support: N/A 00:07:45.985 Firmware Update Granularity: No Information Provided 00:07:45.985 Per-Namespace SMART Log: Yes 00:07:45.985 Asymmetric Namespace Access Log Page: Not Supported
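The identify output in this test comes from the loop at nvme/nvme.sh@15-16 shown above, which runs spdk_nvme_identify once per controller BDF. A minimal standalone sketch of that loop, assuming the four QEMU controllers attached in this run and the build path used by the harness (the BDF list is an assumption taken from this run; adjust it to match lspci on another machine):

bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)  # assumed: the controllers seen in this run
for bdf in "${bdfs[@]}"; do
  # -r selects the transport ID (PCIe transport plus the controller's traddr);
  # -i 0 is the shared-memory group ID, matching the harness invocation
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r "trtype:PCIe traddr:${bdf}" -i 0
done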
00:07:45.985 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:45.985 Command Effects Log Page: Supported 00:07:45.985 Get Log Page Extended Data: Supported 00:07:45.985 Telemetry Log Pages: Not Supported 00:07:45.985 Persistent Event Log Pages: Not Supported 00:07:45.985 Supported Log Pages Log Page: May Support 00:07:45.985 Commands Supported & Effects Log Page: Not Supported 00:07:45.985 Feature Identifiers & Effects Log Page: May Support 00:07:45.985 NVMe-MI Commands & Effects Log Page: May Support 00:07:45.985 Data Area 4 for Telemetry Log: Not Supported 00:07:45.985 Error Log Page Entries Supported: 1 00:07:45.985 Keep Alive: Not Supported 00:07:45.985 00:07:45.985 NVM Command Set Attributes 00:07:45.985 ========================== 00:07:45.985 Submission Queue Entry Size 00:07:45.985 Max: 64 00:07:45.985 Min: 64 00:07:45.985 Completion Queue Entry Size 00:07:45.985 Max: 16 00:07:45.985 Min: 16 00:07:45.985 Number of Namespaces: 256 00:07:45.985 Compare Command: Supported 00:07:45.985 Write Uncorrectable Command: Not Supported 00:07:45.985 Dataset Management Command: Supported 00:07:45.985 Write Zeroes Command: Supported 00:07:45.985 Set Features Save Field: Supported 00:07:45.985 Reservations: Not Supported 00:07:45.985 Timestamp: Supported 00:07:45.985 Copy: Supported 00:07:45.985 Volatile Write Cache: Present 00:07:45.985 Atomic Write Unit (Normal): 1 00:07:45.985 Atomic Write Unit (PFail): 1 00:07:45.985 Atomic Compare & Write Unit: 1 00:07:45.985 Fused Compare & Write: Not Supported 00:07:45.985 Scatter-Gather List 00:07:45.985 SGL Command Set: Supported 00:07:45.985 SGL Keyed: Not Supported 00:07:45.985 SGL Bit Bucket Descriptor: Not Supported 00:07:45.985 SGL Metadata Pointer: Not Supported 00:07:45.985 Oversized SGL: Not Supported 00:07:45.985 SGL Metadata Address: Not Supported 00:07:45.985 SGL Offset: Not Supported 00:07:45.985 Transport SGL Data Block: Not Supported 00:07:45.985 Replay Protected Memory Block: Not Supported 00:07:45.985 00:07:45.985 Firmware Slot Information 00:07:45.985 ========================= 00:07:45.985 Active slot: 1 00:07:45.985 Slot 1 Firmware Revision: 1.0 00:07:45.985 00:07:45.985 00:07:45.985 Commands Supported and Effects 00:07:45.985 ============================== 00:07:45.985 Admin Commands 00:07:45.985 -------------- 00:07:45.985 Delete I/O Submission Queue (00h): Supported 00:07:45.985 Create I/O Submission Queue (01h): Supported 00:07:45.985 Get Log Page (02h): Supported 00:07:45.985 Delete I/O Completion Queue (04h): Supported 00:07:45.985 Create I/O Completion Queue (05h): Supported 00:07:45.985 Identify (06h): Supported 00:07:45.985 Abort (08h): Supported 00:07:45.985 Set Features (09h): Supported 00:07:45.985 Get Features (0Ah): Supported 00:07:45.985 Asynchronous Event Request (0Ch): Supported 00:07:45.985 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:45.985 Directive Send (19h): Supported 00:07:45.985 Directive Receive (1Ah): Supported 00:07:45.985 Virtualization Management (1Ch): Supported 00:07:45.985 Doorbell Buffer Config (7Ch): Supported 00:07:45.985 Format NVM (80h): Supported LBA-Change 00:07:45.985 I/O Commands 00:07:45.985 ------------ 00:07:45.985 Flush (00h): Supported LBA-Change 00:07:45.985 Write (01h): Supported LBA-Change 00:07:45.985 Read (02h): Supported 00:07:45.985 Compare (05h): Supported 00:07:45.985 Write Zeroes (08h): Supported LBA-Change 00:07:45.985 Dataset Management (09h): Supported LBA-Change 00:07:45.985 Unknown (0Ch): Supported 00:07:45.985 Unknown (12h): Supported 00:07:45.985 Copy
(19h): Supported LBA-Change 00:07:45.985 Unknown (1Dh): Supported LBA-Change 00:07:45.985 00:07:45.985 Error Log 00:07:45.985 ========= 00:07:45.985 00:07:45.985 Arbitration 00:07:45.985 =========== 00:07:45.986 Arbitration Burst: no limit 00:07:45.986 00:07:45.986 Power Management 00:07:45.986 ================ 00:07:45.986 Number of Power States: 1 00:07:45.986 Current Power State: Power State #0 00:07:45.986 Power State #0: 00:07:45.986 Max Power: 25.00 W 00:07:45.986 Non-Operational State: Operational 00:07:45.986 Entry Latency: 16 microseconds 00:07:45.986 Exit Latency: 4 microseconds 00:07:45.986 Relative Read Throughput: 0 00:07:45.986 Relative Read Latency: 0 00:07:45.986 Relative Write Throughput: 0 00:07:45.986 Relative Write Latency: 0 00:07:45.986 Idle Power: Not Reported 00:07:45.986 Active Power: Not Reported 00:07:45.986 Non-Operational Permissive Mode: Not Supported 00:07:45.986 00:07:45.986 Health Information 00:07:45.986 ================== 00:07:45.986 Critical Warnings: 00:07:45.986 Available Spare Space: OK 00:07:45.986 Temperature: OK 00:07:45.986 Device Reliability: OK 00:07:45.986 Read Only: No 00:07:45.986 Volatile Memory Backup: OK 00:07:45.986 Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.986 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:45.986 Available Spare: 0% 00:07:45.986 Available Spare Threshold: 0% 00:07:45.986 Life Percentage Used: 0% 00:07:45.986 Data Units Read: 876 00:07:45.986 Data Units Written: 805 00:07:45.986 Host Read Commands: 36498 00:07:45.986 Host Write Commands: 35922 00:07:45.986 Controller Busy Time: 0 minutes 00:07:45.986 Power Cycles: 0 00:07:45.986 Power On Hours: 0 hours 00:07:45.986 Unsafe Shutdowns: 0 00:07:45.986 Unrecoverable Media Errors: 0 00:07:45.986 Lifetime Error Log Entries: 0 00:07:45.986 Warning Temperature Time: 0 minutes 00:07:45.986 Critical Temperature Time: 0 minutes 00:07:45.986 00:07:45.986 Number of Queues 00:07:45.986 ================ 00:07:45.986 Number of I/O Submission Queues: 64 00:07:45.986 Number of I/O Completion Queues: 64 00:07:45.986 00:07:45.986 ZNS Specific Controller Data 00:07:45.986 ============================ 00:07:45.986 Zone Append Size Limit: 0 00:07:45.986 00:07:45.986 00:07:45.986 Active Namespaces 00:07:45.986 ================= 00:07:45.986 Namespace ID:1 00:07:45.986 Error Recovery Timeout: Unlimited 00:07:45.986 Command Set Identifier: NVM (00h) 00:07:45.986 Deallocate: Supported 00:07:45.986 Deallocated/Unwritten Error: Supported 00:07:45.986 Deallocated Read Value: All 0x00 00:07:45.986 Deallocate in Write Zeroes: Not Supported 00:07:45.986 Deallocated Guard Field: 0xFFFF 00:07:45.986 Flush: Supported 00:07:45.986 Reservation: Not Supported 00:07:45.986 Namespace Sharing Capabilities: Multiple Controllers 00:07:45.986 Size (in LBAs): 262144 (1GiB) 00:07:45.986 Capacity (in LBAs): 262144 (1GiB) 00:07:45.986 Utilization (in LBAs): 262144 (1GiB) 00:07:45.986 Thin Provisioning: Not Supported 00:07:45.986 Per-NS Atomic Units: No 00:07:45.986 Maximum Single Source Range Length: 128 00:07:45.986 Maximum Copy Length: 128 00:07:45.986 Maximum Source Range Count: 128 00:07:45.986 NGUID/EUI64 Never Reused: No 00:07:45.986 Namespace Write Protected: No 00:07:45.986 Endurance group ID: 1 00:07:45.986 Number of LBA Formats: 8 00:07:45.986 Current LBA Format: LBA Format #04 00:07:45.986 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.986 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.986 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.986 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:07:45.986 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.986 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.986 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.986 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.986 00:07:45.986 Get Feature FDP: 00:07:45.986 ================ 00:07:45.986 Enabled: Yes 00:07:45.986 FDP configuration index: 0 00:07:45.986 00:07:45.986 FDP configurations log page 00:07:45.986 =========================== 00:07:45.986 Number of FDP configurations: 1 00:07:45.986 Version: 0 00:07:45.986 Size: 112 00:07:45.986 FDP Configuration Descriptor: 0 00:07:45.986 Descriptor Size: 96 00:07:45.986 Reclaim Group Identifier format: 2 00:07:45.986 FDP Volatile Write Cache: Not Present 00:07:45.986 FDP Configuration: Valid 00:07:45.986 Vendor Specific Size: 0 00:07:45.986 Number of Reclaim Groups: 2 00:07:45.986 Number of Reclaim Unit Handles: 8 00:07:45.986 Max Placement Identifiers: 128 00:07:45.986 Number of Namespaces Supported: 256 00:07:45.986 Reclaim Unit Nominal Size: 6000000 bytes 00:07:45.986 Estimated Reclaim Unit Time Limit: Not Reported 00:07:45.986 RUH Desc #000: RUH Type: Initially Isolated 00:07:45.986 RUH Desc #001: RUH Type: Initially Isolated 00:07:45.986 RUH Desc #002: RUH Type: Initially Isolated 00:07:45.986 RUH Desc #003: RUH Type: Initially Isolated 00:07:45.986 RUH Desc #004: RUH Type: Initially Isolated 00:07:45.986 RUH Desc #005: RUH Type: Initially Isolated 00:07:45.986 RUH Desc #006: RUH Type: Initially Isolated 00:07:45.986 RUH Desc #007: RUH Type: Initially Isolated 00:07:45.986 00:07:45.986 FDP reclaim unit handle usage log page 00:07:45.986 ====================================== 00:07:45.986 Number of Reclaim Unit Handles: 8 00:07:45.986 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:45.986 RUH Usage Desc #001: RUH Attributes: Unused 00:07:45.986 RUH Usage Desc #002: RUH Attributes: Unused 00:07:45.986 RUH Usage Desc #003: RUH Attributes: Unused 00:07:45.986 RUH Usage Desc #004: RUH Attributes: Unused 00:07:45.986 RUH Usage Desc #005: RUH Attributes: Unused 00:07:45.986 RUH Usage Desc #006: RUH Attributes: Unused 00:07:45.986 RUH Usage Desc #007: RUH Attributes: Unused 00:07:45.986 00:07:45.986 FDP statistics log page 00:07:45.986 ======================= 00:07:45.986 Host bytes with metadata written: 508207104 00:07:45.986 Media bytes with metadata written: 508276736 00:07:45.986 Media bytes erased: 0 00:07:45.986 00:07:45.986 FDP events log page 00:07:45.986 =================== 00:07:45.986 Number of FDP events: 0 00:07:45.986 00:07:45.986 NVM Specific Namespace Data 00:07:45.986 =========================== 00:07:45.986 Logical Block Storage Tag Mask: 0 00:07:45.986 Protection Information Capabilities: 00:07:45.986 16b Guard Protection Information Storage Tag Support: No 00:07:45.986 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.986 Storage Tag Check Read Support: No 00:07:45.986 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.986 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.986 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.986 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.986 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.986 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.987 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.987 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.987 00:07:45.987 real 0m1.216s 00:07:45.987 user 0m0.439s 00:07:45.987 sys 0m0.548s 00:07:45.987 ************************************ 00:07:45.987 END TEST nvme_identify 00:07:45.987 ************************************ 00:07:45.987 23:09:18 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.987 23:09:18 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:46.247 23:09:18 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:46.247 23:09:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.247 23:09:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.247 23:09:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.247 ************************************ 00:07:46.247 START TEST nvme_perf 00:07:46.247 ************************************ 00:07:46.247 23:09:18 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:46.247 23:09:18 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:47.632 Initializing NVMe Controllers 00:07:47.633 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:47.633 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:47.633 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:47.633 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:47.633 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:47.633 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:47.633 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:47.633 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:47.633 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:47.633 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:47.633 Initialization complete. Launching workers. 
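The spdk_nvme_perf invocation above (nvme/nvme.sh@22) drives every attached namespace with a queue depth of 128 (-q 128), sequential reads (-w read) of 12288 bytes each (-o 12288) for one second (-t 1); -LL requests the per-device latency summaries and cumulative latency histograms printed below, and -i 0 joins the same shared-memory group as the other tools in this run. In the table that follows, the MiB/s column is simply IOPS scaled by the I/O size, which gives a quick sanity check on the results; a sketch of that arithmetic for the Total row:

# Sanity check (values copied from the Total row below; 12288 bytes is the -o size)
awk 'BEGIN { iops = 71930.73; io_bytes = 12288;
             printf "%.2f MiB/s\n", iops * io_bytes / (1024 * 1024) }'
# prints 842.94 MiB/s, matching the Total row; per-device rows check out the same way
# (e.g. 11977.84 IOPS * 12288 B ~= 140.37 MiB/s)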
00:07:47.633 ======================================================== 00:07:47.633 Latency(us) 00:07:47.633 Device Information : IOPS MiB/s Average min max 00:07:47.633 PCIE (0000:00:13.0) NSID 1 from core 0: 11977.84 140.37 10705.88 6422.65 40157.48 00:07:47.633 PCIE (0000:00:10.0) NSID 1 from core 0: 11977.84 140.37 10688.90 6099.12 38612.88 00:07:47.633 PCIE (0000:00:11.0) NSID 1 from core 0: 11977.84 140.37 10672.42 6195.39 36802.07 00:07:47.633 PCIE (0000:00:12.0) NSID 1 from core 0: 11977.84 140.37 10653.49 6200.53 35575.95 00:07:47.633 PCIE (0000:00:12.0) NSID 2 from core 0: 11977.84 140.37 10635.87 6320.78 33969.44 00:07:47.633 PCIE (0000:00:12.0) NSID 3 from core 0: 12041.55 141.11 10561.90 6470.80 27123.28 00:07:47.633 ======================================================== 00:07:47.633 Total : 71930.73 842.94 10653.00 6099.12 40157.48 00:07:47.633 00:07:47.633 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:47.633 ================================================================================= 00:07:47.633 1.00000% : 7461.022us 00:07:47.633 10.00000% : 8670.917us 00:07:47.633 25.00000% : 9225.452us 00:07:47.633 50.00000% : 10183.286us 00:07:47.633 75.00000% : 11443.594us 00:07:47.633 90.00000% : 13208.025us 00:07:47.633 95.00000% : 14014.622us 00:07:47.633 98.00000% : 15022.868us 00:07:47.633 99.00000% : 31255.631us 00:07:47.633 99.50000% : 38716.652us 00:07:47.633 99.90000% : 39926.548us 00:07:47.633 99.99000% : 40128.197us 00:07:47.633 99.99900% : 40329.846us 00:07:47.633 99.99990% : 40329.846us 00:07:47.633 99.99999% : 40329.846us 00:07:47.633 00:07:47.633 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:47.633 ================================================================================= 00:07:47.633 1.00000% : 7662.671us 00:07:47.633 10.00000% : 8620.505us 00:07:47.633 25.00000% : 9225.452us 00:07:47.633 50.00000% : 10183.286us 00:07:47.633 75.00000% : 11393.182us 00:07:47.633 90.00000% : 13107.200us 00:07:47.633 95.00000% : 14014.622us 00:07:47.633 98.00000% : 15022.868us 00:07:47.633 99.00000% : 30045.735us 00:07:47.633 99.50000% : 37103.458us 00:07:47.633 99.90000% : 38313.354us 00:07:47.633 99.99000% : 38716.652us 00:07:47.633 99.99900% : 38716.652us 00:07:47.633 99.99990% : 38716.652us 00:07:47.633 99.99999% : 38716.652us 00:07:47.633 00:07:47.633 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:47.633 ================================================================================= 00:07:47.633 1.00000% : 7662.671us 00:07:47.633 10.00000% : 8620.505us 00:07:47.633 25.00000% : 9225.452us 00:07:47.633 50.00000% : 10183.286us 00:07:47.633 75.00000% : 11342.769us 00:07:47.633 90.00000% : 13107.200us 00:07:47.633 95.00000% : 14014.622us 00:07:47.633 98.00000% : 15325.342us 00:07:47.633 99.00000% : 28230.892us 00:07:47.633 99.50000% : 35288.615us 00:07:47.633 99.90000% : 36498.511us 00:07:47.633 99.99000% : 36901.809us 00:07:47.633 99.99900% : 36901.809us 00:07:47.633 99.99990% : 36901.809us 00:07:47.633 99.99999% : 36901.809us 00:07:47.633 00:07:47.633 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:47.633 ================================================================================= 00:07:47.633 1.00000% : 7612.258us 00:07:47.633 10.00000% : 8670.917us 00:07:47.633 25.00000% : 9225.452us 00:07:47.633 50.00000% : 10132.874us 00:07:47.633 75.00000% : 11393.182us 00:07:47.633 90.00000% : 13107.200us 00:07:47.633 95.00000% : 14014.622us 00:07:47.633 98.00000% : 15426.166us 
00:07:47.633 99.00000% : 27827.594us 00:07:47.633 99.50000% : 34078.720us 00:07:47.633 99.90000% : 35288.615us 00:07:47.633 99.99000% : 35691.914us 00:07:47.633 99.99900% : 35691.914us 00:07:47.633 99.99990% : 35691.914us 00:07:47.633 99.99999% : 35691.914us 00:07:47.633 00:07:47.633 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:47.633 ================================================================================= 00:07:47.633 1.00000% : 7511.434us 00:07:47.633 10.00000% : 8670.917us 00:07:47.633 25.00000% : 9225.452us 00:07:47.633 50.00000% : 10132.874us 00:07:47.633 75.00000% : 11443.594us 00:07:47.633 90.00000% : 13107.200us 00:07:47.633 95.00000% : 14014.622us 00:07:47.633 98.00000% : 15325.342us 00:07:47.633 99.00000% : 26214.400us 00:07:47.633 99.50000% : 32465.526us 00:07:47.633 99.90000% : 33675.422us 00:07:47.633 99.99000% : 34078.720us 00:07:47.633 99.99900% : 34078.720us 00:07:47.633 99.99990% : 34078.720us 00:07:47.633 99.99999% : 34078.720us 00:07:47.633 00:07:47.633 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:47.633 ================================================================================= 00:07:47.633 1.00000% : 7410.609us 00:07:47.633 10.00000% : 8670.917us 00:07:47.633 25.00000% : 9225.452us 00:07:47.633 50.00000% : 10183.286us 00:07:47.633 75.00000% : 11443.594us 00:07:47.633 90.00000% : 13208.025us 00:07:47.633 95.00000% : 13913.797us 00:07:47.633 98.00000% : 15224.517us 00:07:47.633 99.00000% : 18350.080us 00:07:47.633 99.50000% : 25609.452us 00:07:47.633 99.90000% : 26819.348us 00:07:47.633 99.99000% : 27222.646us 00:07:47.633 99.99900% : 27222.646us 00:07:47.633 99.99990% : 27222.646us 00:07:47.633 99.99999% : 27222.646us 00:07:47.633 00:07:47.633 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:47.633 ============================================================================== 00:07:47.633 Range in us Cumulative IO count 00:07:47.633 6402.363 - 6427.569: 0.0083% ( 1) 00:07:47.633 6427.569 - 6452.775: 0.0166% ( 1) 00:07:47.633 6452.775 - 6503.188: 0.0499% ( 4) 00:07:47.633 6503.188 - 6553.600: 0.0748% ( 3) 00:07:47.633 6553.600 - 6604.012: 0.0997% ( 3) 00:07:47.633 6604.012 - 6654.425: 0.1330% ( 4) 00:07:47.633 6654.425 - 6704.837: 0.1579% ( 3) 00:07:47.633 6704.837 - 6755.249: 0.2410% ( 10) 00:07:47.633 6755.249 - 6805.662: 0.2909% ( 6) 00:07:47.633 6805.662 - 6856.074: 0.3491% ( 7) 00:07:47.633 6856.074 - 6906.486: 0.4072% ( 7) 00:07:47.633 6906.486 - 6956.898: 0.4654% ( 7) 00:07:47.633 6956.898 - 7007.311: 0.5236% ( 7) 00:07:47.633 7007.311 - 7057.723: 0.5652% ( 5) 00:07:47.633 7057.723 - 7108.135: 0.6316% ( 8) 00:07:47.633 7108.135 - 7158.548: 0.6898% ( 7) 00:07:47.633 7158.548 - 7208.960: 0.7480% ( 7) 00:07:47.633 7208.960 - 7259.372: 0.8062% ( 7) 00:07:47.633 7259.372 - 7309.785: 0.8644% ( 7) 00:07:47.633 7309.785 - 7360.197: 0.9309% ( 8) 00:07:47.633 7360.197 - 7410.609: 0.9890% ( 7) 00:07:47.633 7410.609 - 7461.022: 1.0472% ( 7) 00:07:47.633 7461.022 - 7511.434: 1.0971% ( 6) 00:07:47.633 7511.434 - 7561.846: 1.1553% ( 7) 00:07:47.633 7561.846 - 7612.258: 1.2134% ( 7) 00:07:47.633 7612.258 - 7662.671: 1.2467% ( 4) 00:07:47.633 7662.671 - 7713.083: 1.2716% ( 3) 00:07:47.633 7713.083 - 7763.495: 1.3880% ( 14) 00:07:47.633 7763.495 - 7813.908: 1.4877% ( 12) 00:07:47.633 7813.908 - 7864.320: 1.5957% ( 13) 00:07:47.633 7864.320 - 7914.732: 1.7204% ( 15) 00:07:47.633 7914.732 - 7965.145: 1.8368% ( 14) 00:07:47.633 7965.145 - 8015.557: 2.0529% ( 26) 00:07:47.633 8015.557 - 8065.969: 
2.5183% ( 56) 00:07:47.633 8065.969 - 8116.382: 2.9422% ( 51) 00:07:47.633 8116.382 - 8166.794: 3.3577% ( 50) 00:07:47.633 8166.794 - 8217.206: 3.9644% ( 73) 00:07:47.633 8217.206 - 8267.618: 4.5462% ( 70) 00:07:47.633 8267.618 - 8318.031: 5.0615% ( 62) 00:07:47.633 8318.031 - 8368.443: 5.6184% ( 67) 00:07:47.633 8368.443 - 8418.855: 6.2583% ( 77) 00:07:47.633 8418.855 - 8469.268: 7.0894% ( 100) 00:07:47.633 8469.268 - 8519.680: 7.9122% ( 99) 00:07:47.633 8519.680 - 8570.092: 8.8597% ( 114) 00:07:47.633 8570.092 - 8620.505: 9.7906% ( 112) 00:07:47.633 8620.505 - 8670.917: 10.8211% ( 124) 00:07:47.633 8670.917 - 8721.329: 12.0346% ( 146) 00:07:47.633 8721.329 - 8771.742: 13.4142% ( 166) 00:07:47.633 8771.742 - 8822.154: 14.6609% ( 150) 00:07:47.633 8822.154 - 8872.566: 15.9491% ( 155) 00:07:47.633 8872.566 - 8922.978: 17.2706% ( 159) 00:07:47.633 8922.978 - 8973.391: 18.4757% ( 145) 00:07:47.633 8973.391 - 9023.803: 19.7557% ( 154) 00:07:47.633 9023.803 - 9074.215: 21.1270% ( 165) 00:07:47.633 9074.215 - 9124.628: 22.4235% ( 156) 00:07:47.633 9124.628 - 9175.040: 23.8032% ( 166) 00:07:47.634 9175.040 - 9225.452: 25.0914% ( 155) 00:07:47.634 9225.452 - 9275.865: 26.4295% ( 161) 00:07:47.634 9275.865 - 9326.277: 27.8840% ( 175) 00:07:47.634 9326.277 - 9376.689: 29.2969% ( 170) 00:07:47.634 9376.689 - 9427.102: 30.6017% ( 157) 00:07:47.634 9427.102 - 9477.514: 32.1725% ( 189) 00:07:47.634 9477.514 - 9527.926: 33.5938% ( 171) 00:07:47.634 9527.926 - 9578.338: 34.9900% ( 168) 00:07:47.634 9578.338 - 9628.751: 36.3863% ( 168) 00:07:47.634 9628.751 - 9679.163: 37.8324% ( 174) 00:07:47.634 9679.163 - 9729.575: 39.1705% ( 161) 00:07:47.634 9729.575 - 9779.988: 40.5170% ( 162) 00:07:47.634 9779.988 - 9830.400: 41.7719% ( 151) 00:07:47.634 9830.400 - 9880.812: 42.9023% ( 136) 00:07:47.634 9880.812 - 9931.225: 44.0741% ( 141) 00:07:47.634 9931.225 - 9981.637: 45.4870% ( 170) 00:07:47.634 9981.637 - 10032.049: 46.8168% ( 160) 00:07:47.634 10032.049 - 10082.462: 48.0552% ( 149) 00:07:47.634 10082.462 - 10132.874: 49.2686% ( 146) 00:07:47.634 10132.874 - 10183.286: 50.4571% ( 143) 00:07:47.634 10183.286 - 10233.698: 51.6041% ( 138) 00:07:47.634 10233.698 - 10284.111: 52.8258% ( 147) 00:07:47.634 10284.111 - 10334.523: 54.0475% ( 147) 00:07:47.634 10334.523 - 10384.935: 55.2443% ( 144) 00:07:47.634 10384.935 - 10435.348: 56.5991% ( 163) 00:07:47.634 10435.348 - 10485.760: 57.9455% ( 162) 00:07:47.634 10485.760 - 10536.172: 59.1090% ( 140) 00:07:47.634 10536.172 - 10586.585: 60.2311% ( 135) 00:07:47.634 10586.585 - 10636.997: 61.3946% ( 140) 00:07:47.634 10636.997 - 10687.409: 62.4252% ( 124) 00:07:47.634 10687.409 - 10737.822: 63.3727% ( 114) 00:07:47.634 10737.822 - 10788.234: 64.3035% ( 112) 00:07:47.634 10788.234 - 10838.646: 65.3341% ( 124) 00:07:47.634 10838.646 - 10889.058: 66.2068% ( 105) 00:07:47.634 10889.058 - 10939.471: 67.0628% ( 103) 00:07:47.634 10939.471 - 10989.883: 67.9604% ( 108) 00:07:47.634 10989.883 - 11040.295: 68.8497% ( 107) 00:07:47.634 11040.295 - 11090.708: 69.8886% ( 125) 00:07:47.634 11090.708 - 11141.120: 70.7945% ( 109) 00:07:47.634 11141.120 - 11191.532: 71.6838% ( 107) 00:07:47.634 11191.532 - 11241.945: 72.6230% ( 113) 00:07:47.634 11241.945 - 11292.357: 73.4458% ( 99) 00:07:47.634 11292.357 - 11342.769: 74.2021% ( 91) 00:07:47.634 11342.769 - 11393.182: 74.9086% ( 85) 00:07:47.634 11393.182 - 11443.594: 75.5735% ( 80) 00:07:47.634 11443.594 - 11494.006: 76.1719% ( 72) 00:07:47.634 11494.006 - 11544.418: 76.8201% ( 78) 00:07:47.634 11544.418 - 11594.831: 77.4019% ( 70) 
00:07:47.634 11594.831 - 11645.243: 78.0003% ( 72) 00:07:47.634 11645.243 - 11695.655: 78.6070% ( 73) 00:07:47.634 11695.655 - 11746.068: 79.2553% ( 78) 00:07:47.634 11746.068 - 11796.480: 79.9368% ( 82) 00:07:47.634 11796.480 - 11846.892: 80.5851% ( 78) 00:07:47.634 11846.892 - 11897.305: 81.2251% ( 77) 00:07:47.634 11897.305 - 11947.717: 81.7819% ( 67) 00:07:47.634 11947.717 - 11998.129: 82.2390% ( 55) 00:07:47.634 11998.129 - 12048.542: 82.6878% ( 54) 00:07:47.634 12048.542 - 12098.954: 83.1449% ( 55) 00:07:47.634 12098.954 - 12149.366: 83.6187% ( 57) 00:07:47.634 12149.366 - 12199.778: 84.0176% ( 48) 00:07:47.634 12199.778 - 12250.191: 84.3750% ( 43) 00:07:47.634 12250.191 - 12300.603: 84.7324% ( 43) 00:07:47.634 12300.603 - 12351.015: 85.0898% ( 43) 00:07:47.634 12351.015 - 12401.428: 85.4471% ( 43) 00:07:47.634 12401.428 - 12451.840: 85.8128% ( 44) 00:07:47.634 12451.840 - 12502.252: 86.1203% ( 37) 00:07:47.634 12502.252 - 12552.665: 86.4112% ( 35) 00:07:47.634 12552.665 - 12603.077: 86.7936% ( 46) 00:07:47.634 12603.077 - 12653.489: 87.1094% ( 38) 00:07:47.634 12653.489 - 12703.902: 87.4086% ( 36) 00:07:47.634 12703.902 - 12754.314: 87.7078% ( 36) 00:07:47.634 12754.314 - 12804.726: 88.0070% ( 36) 00:07:47.634 12804.726 - 12855.138: 88.2646% ( 31) 00:07:47.634 12855.138 - 12905.551: 88.5638% ( 36) 00:07:47.634 12905.551 - 13006.375: 89.2453% ( 82) 00:07:47.634 13006.375 - 13107.200: 89.9186% ( 81) 00:07:47.634 13107.200 - 13208.025: 90.5585% ( 77) 00:07:47.634 13208.025 - 13308.849: 91.3148% ( 91) 00:07:47.634 13308.849 - 13409.674: 91.9797% ( 80) 00:07:47.634 13409.674 - 13510.498: 92.6446% ( 80) 00:07:47.634 13510.498 - 13611.323: 93.3095% ( 80) 00:07:47.634 13611.323 - 13712.148: 93.8248% ( 62) 00:07:47.634 13712.148 - 13812.972: 94.3983% ( 69) 00:07:47.634 13812.972 - 13913.797: 94.9302% ( 64) 00:07:47.634 13913.797 - 14014.622: 95.3790% ( 54) 00:07:47.634 14014.622 - 14115.446: 95.7696% ( 47) 00:07:47.634 14115.446 - 14216.271: 96.1852% ( 50) 00:07:47.634 14216.271 - 14317.095: 96.5342% ( 42) 00:07:47.634 14317.095 - 14417.920: 96.8667% ( 40) 00:07:47.634 14417.920 - 14518.745: 97.1742% ( 37) 00:07:47.634 14518.745 - 14619.569: 97.4402% ( 32) 00:07:47.634 14619.569 - 14720.394: 97.6562% ( 26) 00:07:47.634 14720.394 - 14821.218: 97.8474% ( 23) 00:07:47.634 14821.218 - 14922.043: 97.9555% ( 13) 00:07:47.634 14922.043 - 15022.868: 98.0718% ( 14) 00:07:47.634 15022.868 - 15123.692: 98.1799% ( 13) 00:07:47.634 15123.692 - 15224.517: 98.2713% ( 11) 00:07:47.634 15224.517 - 15325.342: 98.3876% ( 14) 00:07:47.634 15325.342 - 15426.166: 98.5040% ( 14) 00:07:47.634 15426.166 - 15526.991: 98.5705% ( 8) 00:07:47.634 15526.991 - 15627.815: 98.6287% ( 7) 00:07:47.634 15627.815 - 15728.640: 98.6785% ( 6) 00:07:47.634 15728.640 - 15829.465: 98.7284% ( 6) 00:07:47.634 15829.465 - 15930.289: 98.7866% ( 7) 00:07:47.634 15930.289 - 16031.114: 98.8447% ( 7) 00:07:47.634 16031.114 - 16131.938: 98.9029% ( 7) 00:07:47.634 16131.938 - 16232.763: 98.9362% ( 4) 00:07:47.634 30852.332 - 31053.982: 98.9528% ( 2) 00:07:47.634 31053.982 - 31255.631: 99.0110% ( 7) 00:07:47.634 31255.631 - 31457.280: 99.0691% ( 7) 00:07:47.634 31457.280 - 31658.929: 99.1439% ( 9) 00:07:47.634 31658.929 - 31860.578: 99.2021% ( 7) 00:07:47.634 31860.578 - 32062.228: 99.2686% ( 8) 00:07:47.634 32062.228 - 32263.877: 99.3351% ( 8) 00:07:47.634 32263.877 - 32465.526: 99.4016% ( 8) 00:07:47.634 32465.526 - 32667.175: 99.4681% ( 8) 00:07:47.634 38515.003 - 38716.652: 99.5263% ( 7) 00:07:47.634 38716.652 - 38918.302: 99.5928% ( 8) 
00:07:47.634 38918.302 - 39119.951: 99.6592% ( 8) 00:07:47.634 39119.951 - 39321.600: 99.7257% ( 8) 00:07:47.634 39321.600 - 39523.249: 99.7922% ( 8) 00:07:47.634 39523.249 - 39724.898: 99.8587% ( 8) 00:07:47.634 39724.898 - 39926.548: 99.9252% ( 8) 00:07:47.634 39926.548 - 40128.197: 99.9917% ( 8) 00:07:47.634 40128.197 - 40329.846: 100.0000% ( 1) 00:07:47.634 00:07:47.634 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:47.634 ============================================================================== 00:07:47.634 Range in us Cumulative IO count 00:07:47.634 6074.683 - 6099.889: 0.0083% ( 1) 00:07:47.634 6125.095 - 6150.302: 0.0332% ( 3) 00:07:47.634 6150.302 - 6175.508: 0.0416% ( 1) 00:07:47.634 6200.714 - 6225.920: 0.0748% ( 4) 00:07:47.634 6251.126 - 6276.332: 0.0914% ( 2) 00:07:47.634 6276.332 - 6301.538: 0.0997% ( 1) 00:07:47.634 6326.745 - 6351.951: 0.1330% ( 4) 00:07:47.634 6377.157 - 6402.363: 0.1413% ( 1) 00:07:47.634 6402.363 - 6427.569: 0.1579% ( 2) 00:07:47.634 6427.569 - 6452.775: 0.1745% ( 2) 00:07:47.634 6452.775 - 6503.188: 0.1912% ( 2) 00:07:47.634 6503.188 - 6553.600: 0.2161% ( 3) 00:07:47.634 6553.600 - 6604.012: 0.2410% ( 3) 00:07:47.634 6604.012 - 6654.425: 0.2660% ( 3) 00:07:47.634 6654.425 - 6704.837: 0.2909% ( 3) 00:07:47.634 6704.837 - 6755.249: 0.3158% ( 3) 00:07:47.634 6755.249 - 6805.662: 0.3657% ( 6) 00:07:47.634 6805.662 - 6856.074: 0.4156% ( 6) 00:07:47.634 6856.074 - 6906.486: 0.4737% ( 7) 00:07:47.634 6906.486 - 6956.898: 0.5153% ( 5) 00:07:47.634 6956.898 - 7007.311: 0.5652% ( 6) 00:07:47.634 7007.311 - 7057.723: 0.6067% ( 5) 00:07:47.634 7057.723 - 7108.135: 0.6566% ( 6) 00:07:47.634 7108.135 - 7158.548: 0.7064% ( 6) 00:07:47.634 7158.548 - 7208.960: 0.7563% ( 6) 00:07:47.634 7208.960 - 7259.372: 0.7812% ( 3) 00:07:47.634 7259.372 - 7309.785: 0.8062% ( 3) 00:07:47.634 7309.785 - 7360.197: 0.8228% ( 2) 00:07:47.634 7360.197 - 7410.609: 0.8561% ( 4) 00:07:47.634 7410.609 - 7461.022: 0.8727% ( 2) 00:07:47.634 7461.022 - 7511.434: 0.9142% ( 5) 00:07:47.634 7511.434 - 7561.846: 0.9558% ( 5) 00:07:47.634 7561.846 - 7612.258: 0.9973% ( 5) 00:07:47.634 7612.258 - 7662.671: 1.0971% ( 12) 00:07:47.635 7662.671 - 7713.083: 1.1719% ( 9) 00:07:47.635 7713.083 - 7763.495: 1.2550% ( 10) 00:07:47.635 7763.495 - 7813.908: 1.3547% ( 12) 00:07:47.635 7813.908 - 7864.320: 1.5209% ( 20) 00:07:47.635 7864.320 - 7914.732: 1.6955% ( 21) 00:07:47.635 7914.732 - 7965.145: 1.9614% ( 32) 00:07:47.635 7965.145 - 8015.557: 2.2440% ( 34) 00:07:47.635 8015.557 - 8065.969: 2.5515% ( 37) 00:07:47.635 8065.969 - 8116.382: 3.0003% ( 54) 00:07:47.635 8116.382 - 8166.794: 3.5738% ( 69) 00:07:47.635 8166.794 - 8217.206: 4.0808% ( 61) 00:07:47.635 8217.206 - 8267.618: 4.5795% ( 60) 00:07:47.635 8267.618 - 8318.031: 5.1695% ( 71) 00:07:47.635 8318.031 - 8368.443: 5.9757% ( 97) 00:07:47.635 8368.443 - 8418.855: 6.7570% ( 94) 00:07:47.635 8418.855 - 8469.268: 7.5549% ( 96) 00:07:47.635 8469.268 - 8519.680: 8.3777% ( 99) 00:07:47.635 8519.680 - 8570.092: 9.3085% ( 112) 00:07:47.635 8570.092 - 8620.505: 10.2144% ( 109) 00:07:47.635 8620.505 - 8670.917: 11.2367% ( 123) 00:07:47.635 8670.917 - 8721.329: 12.2756% ( 125) 00:07:47.635 8721.329 - 8771.742: 13.3311% ( 127) 00:07:47.635 8771.742 - 8822.154: 14.5695% ( 149) 00:07:47.635 8822.154 - 8872.566: 15.9574% ( 167) 00:07:47.635 8872.566 - 8922.978: 17.1376% ( 142) 00:07:47.635 8922.978 - 8973.391: 18.6420% ( 181) 00:07:47.635 8973.391 - 9023.803: 20.0133% ( 165) 00:07:47.635 9023.803 - 9074.215: 21.3431% ( 160) 00:07:47.635 
9074.215 - 9124.628: 22.5898% ( 150) 00:07:47.635 9124.628 - 9175.040: 23.8364% ( 150) 00:07:47.635 9175.040 - 9225.452: 25.6233% ( 215) 00:07:47.635 9225.452 - 9275.865: 27.1110% ( 179) 00:07:47.635 9275.865 - 9326.277: 28.4741% ( 164) 00:07:47.635 9326.277 - 9376.689: 29.9950% ( 183) 00:07:47.635 9376.689 - 9427.102: 31.2916% ( 156) 00:07:47.635 9427.102 - 9477.514: 32.5964% ( 157) 00:07:47.635 9477.514 - 9527.926: 34.0259% ( 172) 00:07:47.635 9527.926 - 9578.338: 35.2726% ( 150) 00:07:47.635 9578.338 - 9628.751: 36.6024% ( 160) 00:07:47.635 9628.751 - 9679.163: 37.9488% ( 162) 00:07:47.635 9679.163 - 9729.575: 39.0791% ( 136) 00:07:47.635 9729.575 - 9779.988: 40.4505% ( 165) 00:07:47.635 9779.988 - 9830.400: 41.6307% ( 142) 00:07:47.635 9830.400 - 9880.812: 42.9854% ( 163) 00:07:47.635 9880.812 - 9931.225: 44.4232% ( 173) 00:07:47.635 9931.225 - 9981.637: 45.6533% ( 148) 00:07:47.635 9981.637 - 10032.049: 46.9082% ( 151) 00:07:47.635 10032.049 - 10082.462: 48.1383% ( 148) 00:07:47.635 10082.462 - 10132.874: 49.2686% ( 136) 00:07:47.635 10132.874 - 10183.286: 50.4239% ( 139) 00:07:47.635 10183.286 - 10233.698: 51.5708% ( 138) 00:07:47.635 10233.698 - 10284.111: 52.8424% ( 153) 00:07:47.635 10284.111 - 10334.523: 53.9395% ( 132) 00:07:47.635 10334.523 - 10384.935: 54.9867% ( 126) 00:07:47.635 10384.935 - 10435.348: 56.0422% ( 127) 00:07:47.635 10435.348 - 10485.760: 57.2058% ( 140) 00:07:47.635 10485.760 - 10536.172: 58.4608% ( 151) 00:07:47.635 10536.172 - 10586.585: 59.3833% ( 111) 00:07:47.635 10586.585 - 10636.997: 60.6466% ( 152) 00:07:47.635 10636.997 - 10687.409: 61.8185% ( 141) 00:07:47.635 10687.409 - 10737.822: 62.9322% ( 134) 00:07:47.635 10737.822 - 10788.234: 64.0376% ( 133) 00:07:47.635 10788.234 - 10838.646: 65.1513% ( 134) 00:07:47.635 10838.646 - 10889.058: 66.4062% ( 151) 00:07:47.635 10889.058 - 10939.471: 67.3537% ( 114) 00:07:47.635 10939.471 - 10989.883: 68.3261% ( 117) 00:07:47.635 10989.883 - 11040.295: 69.1905% ( 104) 00:07:47.635 11040.295 - 11090.708: 70.1463% ( 115) 00:07:47.635 11090.708 - 11141.120: 71.0771% ( 112) 00:07:47.635 11141.120 - 11191.532: 71.7586% ( 82) 00:07:47.635 11191.532 - 11241.945: 72.7144% ( 115) 00:07:47.635 11241.945 - 11292.357: 73.5539% ( 101) 00:07:47.635 11292.357 - 11342.769: 74.3767% ( 99) 00:07:47.635 11342.769 - 11393.182: 75.2161% ( 101) 00:07:47.635 11393.182 - 11443.594: 76.0389% ( 99) 00:07:47.635 11443.594 - 11494.006: 76.6124% ( 69) 00:07:47.635 11494.006 - 11544.418: 77.2606% ( 78) 00:07:47.635 11544.418 - 11594.831: 77.7344% ( 57) 00:07:47.635 11594.831 - 11645.243: 78.2497% ( 62) 00:07:47.635 11645.243 - 11695.655: 78.7068% ( 55) 00:07:47.635 11695.655 - 11746.068: 79.2470% ( 65) 00:07:47.635 11746.068 - 11796.480: 79.7207% ( 57) 00:07:47.635 11796.480 - 11846.892: 80.0864% ( 44) 00:07:47.635 11846.892 - 11897.305: 80.5851% ( 60) 00:07:47.635 11897.305 - 11947.717: 81.0422% ( 55) 00:07:47.635 11947.717 - 11998.129: 81.4412% ( 48) 00:07:47.635 11998.129 - 12048.542: 81.8733% ( 52) 00:07:47.635 12048.542 - 12098.954: 82.4053% ( 64) 00:07:47.635 12098.954 - 12149.366: 82.9122% ( 61) 00:07:47.635 12149.366 - 12199.778: 83.2613% ( 42) 00:07:47.635 12199.778 - 12250.191: 83.8098% ( 66) 00:07:47.635 12250.191 - 12300.603: 84.2836% ( 57) 00:07:47.635 12300.603 - 12351.015: 84.5994% ( 38) 00:07:47.635 12351.015 - 12401.428: 84.9900% ( 47) 00:07:47.635 12401.428 - 12451.840: 85.3890% ( 48) 00:07:47.635 12451.840 - 12502.252: 85.7547% ( 44) 00:07:47.635 12502.252 - 12552.665: 86.1120% ( 43) 00:07:47.635 12552.665 - 12603.077: 
86.4943% ( 46) 00:07:47.635 12603.077 - 12653.489: 86.8767% ( 46) 00:07:47.635 12653.489 - 12703.902: 87.2673% ( 47) 00:07:47.635 12703.902 - 12754.314: 87.5997% ( 40) 00:07:47.635 12754.314 - 12804.726: 88.0568% ( 55) 00:07:47.635 12804.726 - 12855.138: 88.4309% ( 45) 00:07:47.635 12855.138 - 12905.551: 88.8630% ( 52) 00:07:47.635 12905.551 - 13006.375: 89.5695% ( 85) 00:07:47.635 13006.375 - 13107.200: 90.3258% ( 91) 00:07:47.635 13107.200 - 13208.025: 90.8826% ( 67) 00:07:47.635 13208.025 - 13308.849: 91.5392% ( 79) 00:07:47.635 13308.849 - 13409.674: 92.1875% ( 78) 00:07:47.635 13409.674 - 13510.498: 92.8025% ( 74) 00:07:47.635 13510.498 - 13611.323: 93.4009% ( 72) 00:07:47.635 13611.323 - 13712.148: 93.9910% ( 71) 00:07:47.635 13712.148 - 13812.972: 94.4980% ( 61) 00:07:47.635 13812.972 - 13913.797: 94.8720% ( 45) 00:07:47.635 13913.797 - 14014.622: 95.2211% ( 42) 00:07:47.635 14014.622 - 14115.446: 95.6366% ( 50) 00:07:47.635 14115.446 - 14216.271: 96.0106% ( 45) 00:07:47.635 14216.271 - 14317.095: 96.2849% ( 33) 00:07:47.635 14317.095 - 14417.920: 96.5924% ( 37) 00:07:47.635 14417.920 - 14518.745: 96.8750% ( 34) 00:07:47.635 14518.745 - 14619.569: 97.1991% ( 39) 00:07:47.635 14619.569 - 14720.394: 97.4568% ( 31) 00:07:47.635 14720.394 - 14821.218: 97.6812% ( 27) 00:07:47.635 14821.218 - 14922.043: 97.9305% ( 30) 00:07:47.635 14922.043 - 15022.868: 98.0801% ( 18) 00:07:47.635 15022.868 - 15123.692: 98.2297% ( 18) 00:07:47.635 15123.692 - 15224.517: 98.3378% ( 13) 00:07:47.635 15224.517 - 15325.342: 98.4541% ( 14) 00:07:47.635 15325.342 - 15426.166: 98.6370% ( 22) 00:07:47.635 15426.166 - 15526.991: 98.7699% ( 16) 00:07:47.635 15526.991 - 15627.815: 98.8447% ( 9) 00:07:47.635 15627.815 - 15728.640: 98.8946% ( 6) 00:07:47.635 15728.640 - 15829.465: 98.9279% ( 4) 00:07:47.635 15829.465 - 15930.289: 98.9362% ( 1) 00:07:47.635 29642.437 - 29844.086: 98.9528% ( 2) 00:07:47.635 29844.086 - 30045.735: 99.1190% ( 20) 00:07:47.635 30247.385 - 30449.034: 99.1772% ( 7) 00:07:47.635 30449.034 - 30650.683: 99.2354% ( 7) 00:07:47.635 30650.683 - 30852.332: 99.2852% ( 6) 00:07:47.635 30852.332 - 31053.982: 99.3517% ( 8) 00:07:47.635 31053.982 - 31255.631: 99.4265% ( 9) 00:07:47.635 31255.631 - 31457.280: 99.4681% ( 5) 00:07:47.635 36700.160 - 36901.809: 99.4930% ( 3) 00:07:47.635 36901.809 - 37103.458: 99.5595% ( 8) 00:07:47.635 37103.458 - 37305.108: 99.6177% ( 7) 00:07:47.635 37305.108 - 37506.757: 99.6592% ( 5) 00:07:47.635 37506.757 - 37708.406: 99.7340% ( 9) 00:07:47.635 37708.406 - 37910.055: 99.7839% ( 6) 00:07:47.635 37910.055 - 38111.705: 99.8421% ( 7) 00:07:47.635 38111.705 - 38313.354: 99.9003% ( 7) 00:07:47.635 38313.354 - 38515.003: 99.9584% ( 7) 00:07:47.635 38515.003 - 38716.652: 100.0000% ( 5) 00:07:47.635 00:07:47.635 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:47.635 ============================================================================== 00:07:47.635 Range in us Cumulative IO count 00:07:47.635 6175.508 - 6200.714: 0.0166% ( 2) 00:07:47.635 6200.714 - 6225.920: 0.0249% ( 1) 00:07:47.635 6225.920 - 6251.126: 0.0332% ( 1) 00:07:47.635 6251.126 - 6276.332: 0.0499% ( 2) 00:07:47.635 6276.332 - 6301.538: 0.0665% ( 2) 00:07:47.635 6301.538 - 6326.745: 0.0831% ( 2) 00:07:47.635 6326.745 - 6351.951: 0.0914% ( 1) 00:07:47.635 6351.951 - 6377.157: 0.1080% ( 2) 00:07:47.635 6377.157 - 6402.363: 0.1247% ( 2) 00:07:47.635 6402.363 - 6427.569: 0.1330% ( 1) 00:07:47.635 6427.569 - 6452.775: 0.1496% ( 2) 00:07:47.636 6452.775 - 6503.188: 0.1912% ( 5) 00:07:47.636 
6503.188 - 6553.600: 0.2161% ( 3) 00:07:47.636 6553.600 - 6604.012: 0.2493% ( 4) 00:07:47.636 6604.012 - 6654.425: 0.2743% ( 3) 00:07:47.636 6654.425 - 6704.837: 0.2992% ( 3) 00:07:47.636 6704.837 - 6755.249: 0.3241% ( 3) 00:07:47.636 6755.249 - 6805.662: 0.3574% ( 4) 00:07:47.636 6805.662 - 6856.074: 0.3823% ( 3) 00:07:47.636 6856.074 - 6906.486: 0.4156% ( 4) 00:07:47.636 6906.486 - 6956.898: 0.4405% ( 3) 00:07:47.636 6956.898 - 7007.311: 0.4654% ( 3) 00:07:47.636 7007.311 - 7057.723: 0.5070% ( 5) 00:07:47.636 7057.723 - 7108.135: 0.6233% ( 14) 00:07:47.636 7108.135 - 7158.548: 0.7397% ( 14) 00:07:47.636 7158.548 - 7208.960: 0.7646% ( 3) 00:07:47.636 7208.960 - 7259.372: 0.7729% ( 1) 00:07:47.636 7259.372 - 7309.785: 0.7812% ( 1) 00:07:47.636 7309.785 - 7360.197: 0.7896% ( 1) 00:07:47.636 7360.197 - 7410.609: 0.8228% ( 4) 00:07:47.636 7410.609 - 7461.022: 0.8477% ( 3) 00:07:47.636 7461.022 - 7511.434: 0.8976% ( 6) 00:07:47.636 7511.434 - 7561.846: 0.9475% ( 6) 00:07:47.636 7561.846 - 7612.258: 0.9890% ( 5) 00:07:47.636 7612.258 - 7662.671: 1.0721% ( 10) 00:07:47.636 7662.671 - 7713.083: 1.1636% ( 11) 00:07:47.636 7713.083 - 7763.495: 1.2799% ( 14) 00:07:47.636 7763.495 - 7813.908: 1.3880% ( 13) 00:07:47.636 7813.908 - 7864.320: 1.4877% ( 12) 00:07:47.636 7864.320 - 7914.732: 1.6124% ( 15) 00:07:47.636 7914.732 - 7965.145: 1.7703% ( 19) 00:07:47.636 7965.145 - 8015.557: 2.0030% ( 28) 00:07:47.636 8015.557 - 8065.969: 2.2357% ( 28) 00:07:47.636 8065.969 - 8116.382: 2.6263% ( 47) 00:07:47.636 8116.382 - 8166.794: 2.9837% ( 43) 00:07:47.636 8166.794 - 8217.206: 3.4408% ( 55) 00:07:47.636 8217.206 - 8267.618: 4.0725% ( 76) 00:07:47.636 8267.618 - 8318.031: 4.8787% ( 97) 00:07:47.636 8318.031 - 8368.443: 5.5685% ( 83) 00:07:47.636 8368.443 - 8418.855: 6.2334% ( 80) 00:07:47.636 8418.855 - 8469.268: 7.0977% ( 104) 00:07:47.636 8469.268 - 8519.680: 8.0868% ( 119) 00:07:47.636 8519.680 - 8570.092: 9.2088% ( 135) 00:07:47.636 8570.092 - 8620.505: 10.3391% ( 136) 00:07:47.636 8620.505 - 8670.917: 11.4279% ( 131) 00:07:47.636 8670.917 - 8721.329: 12.4335% ( 121) 00:07:47.636 8721.329 - 8771.742: 13.6386% ( 145) 00:07:47.636 8771.742 - 8822.154: 14.7440% ( 133) 00:07:47.636 8822.154 - 8872.566: 15.9076% ( 140) 00:07:47.636 8872.566 - 8922.978: 16.9631% ( 127) 00:07:47.636 8922.978 - 8973.391: 18.1682% ( 145) 00:07:47.636 8973.391 - 9023.803: 19.5312% ( 164) 00:07:47.636 9023.803 - 9074.215: 20.9192% ( 167) 00:07:47.636 9074.215 - 9124.628: 22.3155% ( 168) 00:07:47.636 9124.628 - 9175.040: 23.7367% ( 171) 00:07:47.636 9175.040 - 9225.452: 25.1745% ( 173) 00:07:47.636 9225.452 - 9275.865: 26.6041% ( 172) 00:07:47.636 9275.865 - 9326.277: 27.9505% ( 162) 00:07:47.636 9326.277 - 9376.689: 29.4215% ( 177) 00:07:47.636 9376.689 - 9427.102: 30.7680% ( 162) 00:07:47.636 9427.102 - 9477.514: 32.1809% ( 170) 00:07:47.636 9477.514 - 9527.926: 33.5938% ( 170) 00:07:47.636 9527.926 - 9578.338: 34.9900% ( 168) 00:07:47.636 9578.338 - 9628.751: 36.4112% ( 171) 00:07:47.636 9628.751 - 9679.163: 37.8158% ( 169) 00:07:47.636 9679.163 - 9729.575: 39.1290% ( 158) 00:07:47.636 9729.575 - 9779.988: 40.4671% ( 161) 00:07:47.636 9779.988 - 9830.400: 41.6805% ( 146) 00:07:47.636 9830.400 - 9880.812: 42.8607% ( 142) 00:07:47.636 9880.812 - 9931.225: 44.1240% ( 152) 00:07:47.636 9931.225 - 9981.637: 45.3707% ( 150) 00:07:47.636 9981.637 - 10032.049: 46.6838% ( 158) 00:07:47.636 10032.049 - 10082.462: 47.8308% ( 138) 00:07:47.636 10082.462 - 10132.874: 49.0276% ( 144) 00:07:47.636 10132.874 - 10183.286: 50.2410% ( 146) 
00:07:47.636 10183.286 - 10233.698: 51.4628% ( 147) 00:07:47.636 10233.698 - 10284.111: 52.5598% ( 132) 00:07:47.636 10284.111 - 10334.523: 53.7400% ( 142) 00:07:47.636 10334.523 - 10384.935: 54.8454% ( 133) 00:07:47.636 10384.935 - 10435.348: 56.0505% ( 145) 00:07:47.636 10435.348 - 10485.760: 57.3886% ( 161) 00:07:47.636 10485.760 - 10536.172: 58.7766% ( 167) 00:07:47.636 10536.172 - 10586.585: 60.0648% ( 155) 00:07:47.636 10586.585 - 10636.997: 61.3281% ( 152) 00:07:47.636 10636.997 - 10687.409: 62.4917% ( 140) 00:07:47.636 10687.409 - 10737.822: 63.7051% ( 146) 00:07:47.636 10737.822 - 10788.234: 64.8604% ( 139) 00:07:47.636 10788.234 - 10838.646: 65.9990% ( 137) 00:07:47.636 10838.646 - 10889.058: 67.1127% ( 134) 00:07:47.636 10889.058 - 10939.471: 68.1184% ( 121) 00:07:47.636 10939.471 - 10989.883: 69.1572% ( 125) 00:07:47.636 10989.883 - 11040.295: 70.2543% ( 132) 00:07:47.636 11040.295 - 11090.708: 71.1519% ( 108) 00:07:47.636 11090.708 - 11141.120: 72.0246% ( 105) 00:07:47.636 11141.120 - 11191.532: 72.7892% ( 92) 00:07:47.636 11191.532 - 11241.945: 73.6037% ( 98) 00:07:47.636 11241.945 - 11292.357: 74.3351% ( 88) 00:07:47.636 11292.357 - 11342.769: 75.0249% ( 83) 00:07:47.636 11342.769 - 11393.182: 75.7480% ( 87) 00:07:47.636 11393.182 - 11443.594: 76.4545% ( 85) 00:07:47.636 11443.594 - 11494.006: 77.1692% ( 86) 00:07:47.636 11494.006 - 11544.418: 77.5931% ( 51) 00:07:47.636 11544.418 - 11594.831: 78.0419% ( 54) 00:07:47.636 11594.831 - 11645.243: 78.4907% ( 54) 00:07:47.636 11645.243 - 11695.655: 78.9312% ( 53) 00:07:47.636 11695.655 - 11746.068: 79.4963% ( 68) 00:07:47.636 11746.068 - 11796.480: 79.9784% ( 58) 00:07:47.636 11796.480 - 11846.892: 80.4189% ( 53) 00:07:47.636 11846.892 - 11897.305: 80.9092% ( 59) 00:07:47.636 11897.305 - 11947.717: 81.4328% ( 63) 00:07:47.636 11947.717 - 11998.129: 81.8983% ( 56) 00:07:47.636 11998.129 - 12048.542: 82.3221% ( 51) 00:07:47.636 12048.542 - 12098.954: 82.7128% ( 47) 00:07:47.636 12098.954 - 12149.366: 83.1200% ( 49) 00:07:47.636 12149.366 - 12199.778: 83.5189% ( 48) 00:07:47.636 12199.778 - 12250.191: 83.9013% ( 46) 00:07:47.636 12250.191 - 12300.603: 84.2919% ( 47) 00:07:47.636 12300.603 - 12351.015: 84.6326% ( 41) 00:07:47.636 12351.015 - 12401.428: 84.9817% ( 42) 00:07:47.636 12401.428 - 12451.840: 85.3391% ( 43) 00:07:47.636 12451.840 - 12502.252: 85.7796% ( 53) 00:07:47.636 12502.252 - 12552.665: 86.1785% ( 48) 00:07:47.636 12552.665 - 12603.077: 86.5608% ( 46) 00:07:47.636 12603.077 - 12653.489: 86.9348% ( 45) 00:07:47.636 12653.489 - 12703.902: 87.2590% ( 39) 00:07:47.636 12703.902 - 12754.314: 87.6330% ( 45) 00:07:47.636 12754.314 - 12804.726: 88.0070% ( 45) 00:07:47.637 12804.726 - 12855.138: 88.4225% ( 50) 00:07:47.637 12855.138 - 12905.551: 88.8049% ( 46) 00:07:47.637 12905.551 - 13006.375: 89.5612% ( 91) 00:07:47.637 13006.375 - 13107.200: 90.3757% ( 98) 00:07:47.637 13107.200 - 13208.025: 91.2566% ( 106) 00:07:47.637 13208.025 - 13308.849: 91.9963% ( 89) 00:07:47.637 13308.849 - 13409.674: 92.7277% ( 88) 00:07:47.637 13409.674 - 13510.498: 93.3178% ( 71) 00:07:47.637 13510.498 - 13611.323: 93.8664% ( 66) 00:07:47.637 13611.323 - 13712.148: 94.3152% ( 54) 00:07:47.637 13712.148 - 13812.972: 94.7058% ( 47) 00:07:47.637 13812.972 - 13913.797: 94.9717% ( 32) 00:07:47.637 13913.797 - 14014.622: 95.2626% ( 35) 00:07:47.637 14014.622 - 14115.446: 95.5452% ( 34) 00:07:47.637 14115.446 - 14216.271: 95.8029% ( 31) 00:07:47.637 14216.271 - 14317.095: 96.0688% ( 32) 00:07:47.637 14317.095 - 14417.920: 96.3182% ( 30) 00:07:47.637 
14417.920 - 14518.745: 96.5426% ( 27) 00:07:47.637 14518.745 - 14619.569: 96.7919% ( 30) 00:07:47.637 14619.569 - 14720.394: 97.0578% ( 32) 00:07:47.637 14720.394 - 14821.218: 97.1825% ( 15) 00:07:47.637 14821.218 - 14922.043: 97.3155% ( 16) 00:07:47.637 14922.043 - 15022.868: 97.4485% ( 16) 00:07:47.637 15022.868 - 15123.692: 97.6396% ( 23) 00:07:47.637 15123.692 - 15224.517: 97.8391% ( 24) 00:07:47.637 15224.517 - 15325.342: 98.0303% ( 23) 00:07:47.637 15325.342 - 15426.166: 98.1799% ( 18) 00:07:47.637 15426.166 - 15526.991: 98.3128% ( 16) 00:07:47.637 15526.991 - 15627.815: 98.4292% ( 14) 00:07:47.637 15627.815 - 15728.640: 98.5123% ( 10) 00:07:47.637 15728.640 - 15829.465: 98.5871% ( 9) 00:07:47.637 15829.465 - 15930.289: 98.6702% ( 10) 00:07:47.637 15930.289 - 16031.114: 98.7533% ( 10) 00:07:47.637 16031.114 - 16131.938: 98.8531% ( 12) 00:07:47.637 16131.938 - 16232.763: 98.9195% ( 8) 00:07:47.637 16232.763 - 16333.588: 98.9362% ( 2) 00:07:47.637 27827.594 - 28029.243: 98.9528% ( 2) 00:07:47.637 28029.243 - 28230.892: 99.0110% ( 7) 00:07:47.637 28230.892 - 28432.542: 99.0442% ( 4) 00:07:47.637 28432.542 - 28634.191: 99.1107% ( 8) 00:07:47.637 28634.191 - 28835.840: 99.1689% ( 7) 00:07:47.637 28835.840 - 29037.489: 99.2354% ( 8) 00:07:47.637 29037.489 - 29239.138: 99.3019% ( 8) 00:07:47.637 29239.138 - 29440.788: 99.3600% ( 7) 00:07:47.637 29440.788 - 29642.437: 99.4265% ( 8) 00:07:47.637 29642.437 - 29844.086: 99.4681% ( 5) 00:07:47.637 35086.966 - 35288.615: 99.5263% ( 7) 00:07:47.637 35288.615 - 35490.265: 99.5928% ( 8) 00:07:47.637 35490.265 - 35691.914: 99.6509% ( 7) 00:07:47.637 35691.914 - 35893.563: 99.7091% ( 7) 00:07:47.637 35893.563 - 36095.212: 99.7756% ( 8) 00:07:47.637 36095.212 - 36296.862: 99.8421% ( 8) 00:07:47.637 36296.862 - 36498.511: 99.9003% ( 7) 00:07:47.637 36498.511 - 36700.160: 99.9584% ( 7) 00:07:47.637 36700.160 - 36901.809: 100.0000% ( 5) 00:07:47.637 00:07:47.637 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:47.637 ============================================================================== 00:07:47.637 Range in us Cumulative IO count 00:07:47.637 6175.508 - 6200.714: 0.0083% ( 1) 00:07:47.637 6200.714 - 6225.920: 0.0166% ( 1) 00:07:47.637 6225.920 - 6251.126: 0.0332% ( 2) 00:07:47.637 6251.126 - 6276.332: 0.0499% ( 2) 00:07:47.637 6276.332 - 6301.538: 0.0582% ( 1) 00:07:47.637 6301.538 - 6326.745: 0.0748% ( 2) 00:07:47.637 6326.745 - 6351.951: 0.0831% ( 1) 00:07:47.637 6351.951 - 6377.157: 0.0997% ( 2) 00:07:47.637 6377.157 - 6402.363: 0.1164% ( 2) 00:07:47.637 6402.363 - 6427.569: 0.1330% ( 2) 00:07:47.637 6427.569 - 6452.775: 0.1496% ( 2) 00:07:47.637 6452.775 - 6503.188: 0.1828% ( 4) 00:07:47.637 6503.188 - 6553.600: 0.2078% ( 3) 00:07:47.637 6553.600 - 6604.012: 0.2327% ( 3) 00:07:47.637 6604.012 - 6654.425: 0.2660% ( 4) 00:07:47.637 6654.425 - 6704.837: 0.2909% ( 3) 00:07:47.637 6704.837 - 6755.249: 0.3241% ( 4) 00:07:47.637 6755.249 - 6805.662: 0.3491% ( 3) 00:07:47.637 6805.662 - 6856.074: 0.3823% ( 4) 00:07:47.637 6856.074 - 6906.486: 0.4072% ( 3) 00:07:47.637 6906.486 - 6956.898: 0.4322% ( 3) 00:07:47.637 6956.898 - 7007.311: 0.4571% ( 3) 00:07:47.637 7007.311 - 7057.723: 0.5153% ( 7) 00:07:47.637 7057.723 - 7108.135: 0.5652% ( 6) 00:07:47.637 7108.135 - 7158.548: 0.6150% ( 6) 00:07:47.637 7158.548 - 7208.960: 0.6400% ( 3) 00:07:47.637 7208.960 - 7259.372: 0.6649% ( 3) 00:07:47.637 7259.372 - 7309.785: 0.6981% ( 4) 00:07:47.637 7309.785 - 7360.197: 0.7397% ( 5) 00:07:47.637 7360.197 - 7410.609: 0.7896% ( 6) 00:07:47.637 
00:07:47.637 [per-bucket latency data elided: the histogram opened above continues from ~7410 us through ~35691 us, cumulative IO count reaching 100.0000%]
00:07:47.638 
00:07:47.638 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:47.638 ==============================================================================
00:07:47.638        Range in us     Cumulative    IO count
00:07:47.639 [per-bucket latency data elided: ~6301 us through ~34078 us, cumulative IO count reaching 100.0000%]
00:07:47.640 
00:07:47.640 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:47.640 ==============================================================================
00:07:47.640        Range in us     Cumulative    IO count
00:07:47.641 [per-bucket latency data elided: ~6452 us through ~27222 us, cumulative IO count reaching 100.0000%]
00:07:47.641 
00:07:47.641 23:09:19 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:48.577 Initializing NVMe Controllers
00:07:48.577 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:48.577 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:48.577 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:48.577 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:48.577 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:48.577 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:48.577 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:48.577 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:48.577 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:48.577 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:48.577 Initialization complete. Launching workers.
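The run above is driven by SPDK's standalone perf tool. As a minimal sketch, assuming a local SPDK checkout already built under SPDK_DIR (a hypothetical path; this CI job uses /home/vagrant/spdk_repo/spdk), the same workload could be reproduced by hand. The flag meanings below follow the spdk_nvme_perf usage text:

    # assumption: SPDK_DIR points at a built SPDK tree; root is needed for device access
    SPDK_DIR=$HOME/spdk
    # -q 128    queue depth per namespace
    # -w write  sequential-write I/O pattern
    # -o 12288  I/O size in bytes (12 KiB)
    # -t 1      run time in seconds
    # -L        enable latency tracking; given twice (-LL) it also prints the
    #           detailed per-bucket histograms seen in this log
    # -i 0      shared-memory group ID, matching the harness's instance
    sudo "$SPDK_DIR/build/bin/spdk_nvme_perf" -q 128 -w write -o 12288 -t 1 -LL -i 0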
00:07:48.577 ========================================================
00:07:48.577                                                                            Latency(us)
00:07:48.577 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:48.577 PCIE (0000:00:13.0) NSID 1 from core 0:   15577.29     182.55    8229.65    5743.19   36150.31
00:07:48.577 PCIE (0000:00:10.0) NSID 1 from core 0:   15577.29     182.55    8215.02    5781.36   34839.13
00:07:48.577 PCIE (0000:00:11.0) NSID 1 from core 0:   15577.29     182.55    8200.12    5857.23   32865.60
00:07:48.577 PCIE (0000:00:12.0) NSID 1 from core 0:   15577.29     182.55    8185.95    5752.60   31993.11
00:07:48.577 PCIE (0000:00:12.0) NSID 2 from core 0:   15577.29     182.55    8171.98    5866.01   30147.81
00:07:48.577 PCIE (0000:00:12.0) NSID 3 from core 0:   15641.13     183.29    8124.49    5857.34   22940.32
00:07:48.577 ========================================================
00:07:48.577 Total                                  :   93527.58    1096.03    8187.83    5743.19   36150.31
00:07:48.577 
00:07:48.577 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:48.577 =================================================================================
00:07:48.577    1% :  6175.508us   10% :  6402.363us   25% :  6604.012us   50% :  6956.898us
00:07:48.577   75% :  9175.040us   90% : 11796.480us   95% : 13510.498us   98% : 14216.271us
00:07:48.577   99% : 14922.043us   99.5% : 30449.034us   99.9% : 35893.563us   99.99%+ : 36296.862us
00:07:48.577 
00:07:48.577 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:48.577 =================================================================================
00:07:48.577    1% :  6049.477us   10% :  6351.951us   25% :  6604.012us   50% :  7057.723us
00:07:48.577   75% :  9124.628us   90% : 11846.892us   95% : 13409.674us   98% : 14216.271us
00:07:48.577   99% : 14619.569us   99.5% : 28029.243us   99.9% : 34482.018us   99.99%+ : 34885.317us
00:07:48.578 
00:07:48.578 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:48.578 =================================================================================
00:07:48.578    1% :  6225.920us   10% :  6377.157us   25% :  6604.012us   50% :  6956.898us
00:07:48.578   75% :  9175.040us   90% : 11897.305us   95% : 13308.849us   98% : 14115.446us
00:07:48.578   99% : 14619.569us   99.5% : 26214.400us   99.9% : 32667.175us   99.99%+ : 32868.825us
00:07:48.578 
00:07:48.578 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:48.578 =================================================================================
00:07:48.578    1% :  6175.508us   10% :  6377.157us   25% :  6604.012us   50% :  6956.898us
00:07:48.578   75% :  9175.040us   90% : 12149.366us   95% : 13208.025us   98% : 14216.271us
00:07:48.578   99% : 15224.517us   99.5% : 25105.329us   99.9% : 31658.929us   99.99%+ : 32062.228us
00:07:48.578 
00:07:48.578 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:48.578 =================================================================================
00:07:48.578    1% :  6200.714us   10% :  6377.157us   25% :  6604.012us   50% :  6956.898us
00:07:48.578   75% :  9175.040us   90% : 12098.954us   95% : 13107.200us   98% : 14216.271us
00:07:48.578   99% : 15224.517us   99.5% : 23290.486us   99.9% : 29844.086us   99.99%+ : 30247.385us
00:07:48.578 
00:07:48.578 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:48.578 =================================================================================
00:07:48.578    1% :  6175.508us   10% :  6377.157us   25% :  6604.012us   50% :  6956.898us
00:07:48.578   75% :  9225.452us   90% : 11897.305us   95% : 13510.498us   98% : 14216.271us
00:07:48.578   99% : 15022.868us   99.5% : 16636.062us   99.9% : 22584.714us   99.99%+ : 22988.012us
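Two quick cross-checks tie the table's columns together: throughput is IOPS times the 12288-byte I/O size, and with a fixed queue depth of 128 Little's law (outstanding I/Os = rate x latency) predicts the average latency. A minimal awk sketch over the first device's row, with the numbers copied from the table above:

    awk 'BEGIN {
        iops = 15577.29; iosz = 12288; qd = 128
        # MiB/s column: IOPS * I/O size / 2^20  ->  ~182.55, matching the table
        printf "MiB/s   : %.2f\n", iops * iosz / 1048576
        # Little's law estimate of average latency: qd / IOPS  ->  ~8217 us,
        # close to the reported 8229.65 us average (ramp effects explain the gap)
        printf "avg est : %.2f us\n", qd / iops * 1e6
    }'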
00:07:48.578 
00:07:48.578 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:48.578 ==============================================================================
00:07:48.578        Range in us     Cumulative    IO count
00:07:48.579 [per-bucket latency data elided: ~5721 us through ~36296 us, cumulative IO count reaching 100.0000%]
00:07:48.579 
00:07:48.579 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:48.579 ==============================================================================
00:07:48.579        Range in us     Cumulative    IO count
00:07:48.580 [per-bucket latency data elided: ~5772 us through ~34885 us, cumulative IO count reaching 100.0000%]
00:07:48.580 
00:07:48.580 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:48.580 ==============================================================================
00:07:48.580        Range in us     Cumulative    IO count
00:07:48.581 [per-bucket latency data elided: ~5847 us through ~32868 us, cumulative IO count reaching 100.0000%]
00:07:48.581 
00:07:48.581 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:48.581 ==============================================================================
00:07:48.581        Range in us     Cumulative    IO count
00:07:48.582 [per-bucket latency data elided: ~5747 us through ~32062 us, cumulative IO count reaching 100.0000%]
00:07:48.582 
00:07:48.582 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:48.582 ==============================================================================
00:07:48.582        Range in us     Cumulative    IO count
00:07:48.582 [per-bucket latency data begins at ~5847 us; the captured log is truncated partway through this histogram]
6175.508: 0.9285% ( 50) 00:07:48.582 6175.508 - 6200.714: 1.3896% ( 72) 00:07:48.582 6200.714 - 6225.920: 2.0684% ( 106) 00:07:48.582 6225.920 - 6251.126: 2.9521% ( 138) 00:07:48.582 6251.126 - 6276.332: 3.9575% ( 157) 00:07:48.582 6276.332 - 6301.538: 5.5840% ( 254) 00:07:48.582 6301.538 - 6326.745: 7.4219% ( 287) 00:07:48.582 6326.745 - 6351.951: 9.0676% ( 257) 00:07:48.582 6351.951 - 6377.157: 10.7838% ( 268) 00:07:48.582 6377.157 - 6402.363: 12.5704% ( 279) 00:07:48.582 6402.363 - 6427.569: 14.8630% ( 358) 00:07:48.582 6427.569 - 6452.775: 16.5151% ( 258) 00:07:48.582 6452.775 - 6503.188: 20.1588% ( 569) 00:07:48.582 6503.188 - 6553.600: 23.8794% ( 581) 00:07:48.582 6553.600 - 6604.012: 26.9659% ( 482) 00:07:48.582 6604.012 - 6654.425: 29.5530% ( 404) 00:07:48.582 6654.425 - 6704.837: 32.7293% ( 496) 00:07:48.582 6704.837 - 6755.249: 36.2385% ( 548) 00:07:48.582 6755.249 - 6805.662: 40.4329% ( 655) 00:07:48.582 6805.662 - 6856.074: 44.2495% ( 596) 00:07:48.582 6856.074 - 6906.486: 47.5986% ( 523) 00:07:48.582 6906.486 - 6956.898: 50.8453% ( 507) 00:07:48.582 6956.898 - 7007.311: 53.0994% ( 352) 00:07:48.582 7007.311 - 7057.723: 55.0909% ( 311) 00:07:48.582 7057.723 - 7108.135: 56.9224% ( 286) 00:07:48.582 7108.135 - 7158.548: 58.6898% ( 276) 00:07:48.582 7158.548 - 7208.960: 59.8040% ( 174) 00:07:48.582 7208.960 - 7259.372: 60.7134% ( 142) 00:07:48.582 7259.372 - 7309.785: 61.5907% ( 137) 00:07:48.582 7309.785 - 7360.197: 62.2503% ( 103) 00:07:48.582 7360.197 - 7410.609: 62.7049% ( 71) 00:07:48.582 7410.609 - 7461.022: 63.1019% ( 62) 00:07:48.582 7461.022 - 7511.434: 63.8128% ( 111) 00:07:48.582 7511.434 - 7561.846: 64.4339% ( 97) 00:07:48.582 7561.846 - 7612.258: 64.9206% ( 76) 00:07:48.582 7612.258 - 7662.671: 65.5289% ( 95) 00:07:48.582 7662.671 - 7713.083: 65.8683% ( 53) 00:07:48.582 7713.083 - 7763.495: 66.0989% ( 36) 00:07:48.582 7763.495 - 7813.908: 66.3038% ( 32) 00:07:48.582 7813.908 - 7864.320: 66.6944% ( 61) 00:07:48.583 7864.320 - 7914.732: 66.8737% ( 28) 00:07:48.583 7914.732 - 7965.145: 67.2579% ( 60) 00:07:48.583 7965.145 - 8015.557: 67.4949% ( 37) 00:07:48.583 8015.557 - 8065.969: 67.7318% ( 37) 00:07:48.583 8065.969 - 8116.382: 68.0008% ( 42) 00:07:48.583 8116.382 - 8166.794: 68.1609% ( 25) 00:07:48.583 8166.794 - 8217.206: 68.3530% ( 30) 00:07:48.583 8217.206 - 8267.618: 68.6219% ( 42) 00:07:48.583 8267.618 - 8318.031: 68.8076% ( 29) 00:07:48.583 8318.031 - 8368.443: 69.0318% ( 35) 00:07:48.583 8368.443 - 8418.855: 69.4736% ( 69) 00:07:48.583 8418.855 - 8469.268: 69.7426% ( 42) 00:07:48.583 8469.268 - 8519.680: 70.0628% ( 50) 00:07:48.583 8519.680 - 8570.092: 70.3445% ( 44) 00:07:48.583 8570.092 - 8620.505: 70.6583% ( 49) 00:07:48.583 8620.505 - 8670.917: 71.0297% ( 58) 00:07:48.583 8670.917 - 8721.329: 71.2666% ( 37) 00:07:48.583 8721.329 - 8771.742: 71.5420% ( 43) 00:07:48.583 8771.742 - 8822.154: 71.8494% ( 48) 00:07:48.583 8822.154 - 8872.566: 72.1504% ( 47) 00:07:48.583 8872.566 - 8922.978: 72.4962% ( 54) 00:07:48.583 8922.978 - 8973.391: 72.8676% ( 58) 00:07:48.583 8973.391 - 9023.803: 73.2902% ( 66) 00:07:48.583 9023.803 - 9074.215: 73.9690% ( 106) 00:07:48.583 9074.215 - 9124.628: 74.5261% ( 87) 00:07:48.583 9124.628 - 9175.040: 75.1153% ( 92) 00:07:48.583 9175.040 - 9225.452: 75.8837% ( 120) 00:07:48.583 9225.452 - 9275.865: 76.2871% ( 63) 00:07:48.583 9275.865 - 9326.277: 76.8315% ( 85) 00:07:48.583 9326.277 - 9376.689: 77.2477% ( 65) 00:07:48.583 9376.689 - 9427.102: 77.7344% ( 76) 00:07:48.583 9427.102 - 9477.514: 78.3876% ( 102) 00:07:48.583 9477.514 - 
9527.926: 78.9062% ( 81) 00:07:48.583 9527.926 - 9578.338: 79.3673% ( 72) 00:07:48.583 9578.338 - 9628.751: 79.7131% ( 54) 00:07:48.583 9628.751 - 9679.163: 80.0525% ( 53) 00:07:48.583 9679.163 - 9729.575: 80.5456% ( 77) 00:07:48.583 9729.575 - 9779.988: 80.9042% ( 56) 00:07:48.583 9779.988 - 9830.400: 81.3461% ( 69) 00:07:48.583 9830.400 - 9880.812: 81.8968% ( 86) 00:07:48.583 9880.812 - 9931.225: 82.3386% ( 69) 00:07:48.583 9931.225 - 9981.637: 82.5948% ( 40) 00:07:48.583 9981.637 - 10032.049: 82.8125% ( 34) 00:07:48.583 10032.049 - 10082.462: 83.0174% ( 32) 00:07:48.583 10082.462 - 10132.874: 83.2031% ( 29) 00:07:48.583 10132.874 - 10183.286: 83.3696% ( 26) 00:07:48.583 10183.286 - 10233.698: 83.5745% ( 32) 00:07:48.583 10233.698 - 10284.111: 83.8435% ( 42) 00:07:48.583 10284.111 - 10334.523: 84.1637% ( 50) 00:07:48.583 10334.523 - 10384.935: 84.3494% ( 29) 00:07:48.583 10384.935 - 10435.348: 84.4903% ( 22) 00:07:48.583 10435.348 - 10485.760: 84.6119% ( 19) 00:07:48.583 10485.760 - 10536.172: 84.7592% ( 23) 00:07:48.583 10536.172 - 10586.585: 84.9001% ( 22) 00:07:48.583 10586.585 - 10636.997: 85.0346% ( 21) 00:07:48.583 10636.997 - 10687.409: 85.1498% ( 18) 00:07:48.583 10687.409 - 10737.822: 85.3484% ( 31) 00:07:48.583 10737.822 - 10788.234: 85.4892% ( 22) 00:07:48.583 10788.234 - 10838.646: 85.7006% ( 33) 00:07:48.583 10838.646 - 10889.058: 85.8927% ( 30) 00:07:48.583 10889.058 - 10939.471: 86.1616% ( 42) 00:07:48.583 10939.471 - 10989.883: 86.3986% ( 37) 00:07:48.583 10989.883 - 11040.295: 86.5587% ( 25) 00:07:48.583 11040.295 - 11090.708: 86.7508% ( 30) 00:07:48.583 11090.708 - 11141.120: 86.9557% ( 32) 00:07:48.583 11141.120 - 11191.532: 87.1222% ( 26) 00:07:48.583 11191.532 - 11241.945: 87.2823% ( 25) 00:07:48.583 11241.945 - 11292.357: 87.4808% ( 31) 00:07:48.583 11292.357 - 11342.769: 87.8650% ( 60) 00:07:48.583 11342.769 - 11393.182: 88.0571% ( 30) 00:07:48.583 11393.182 - 11443.594: 88.2492% ( 30) 00:07:48.583 11443.594 - 11494.006: 88.4413% ( 30) 00:07:48.583 11494.006 - 11544.418: 88.6527% ( 33) 00:07:48.583 11544.418 - 11594.831: 88.8704% ( 34) 00:07:48.583 11594.831 - 11645.243: 89.0497% ( 28) 00:07:48.583 11645.243 - 11695.655: 89.1842% ( 21) 00:07:48.583 11695.655 - 11746.068: 89.2930% ( 17) 00:07:48.583 11746.068 - 11796.480: 89.3827% ( 14) 00:07:48.583 11796.480 - 11846.892: 89.4851% ( 16) 00:07:48.583 11846.892 - 11897.305: 89.5556% ( 11) 00:07:48.583 11897.305 - 11947.717: 89.6260% ( 11) 00:07:48.583 11947.717 - 11998.129: 89.7541% ( 20) 00:07:48.583 11998.129 - 12048.542: 89.9782% ( 35) 00:07:48.583 12048.542 - 12098.954: 90.4393% ( 72) 00:07:48.583 12098.954 - 12149.366: 90.8235% ( 60) 00:07:48.583 12149.366 - 12199.778: 91.1117% ( 45) 00:07:48.583 12199.778 - 12250.191: 91.3742% ( 41) 00:07:48.583 12250.191 - 12300.603: 91.8545% ( 75) 00:07:48.583 12300.603 - 12351.015: 92.0850% ( 36) 00:07:48.583 12351.015 - 12401.428: 92.2067% ( 19) 00:07:48.583 12401.428 - 12451.840: 92.3220% ( 18) 00:07:48.583 12451.840 - 12502.252: 92.4372% ( 18) 00:07:48.583 12502.252 - 12552.665: 92.5653% ( 20) 00:07:48.583 12552.665 - 12603.077: 92.6934% ( 20) 00:07:48.583 12603.077 - 12653.489: 93.1160% ( 66) 00:07:48.583 12653.489 - 12703.902: 93.2889% ( 27) 00:07:48.583 12703.902 - 12754.314: 93.5067% ( 34) 00:07:48.583 12754.314 - 12804.726: 93.6732% ( 26) 00:07:48.583 12804.726 - 12855.138: 93.9357% ( 41) 00:07:48.583 12855.138 - 12905.551: 94.1214% ( 29) 00:07:48.583 12905.551 - 13006.375: 94.6337% ( 80) 00:07:48.583 13006.375 - 13107.200: 95.0692% ( 68) 00:07:48.583 13107.200 - 
13208.025: 95.2805% ( 33) 00:07:48.583 13208.025 - 13308.849: 95.5046% ( 35) 00:07:48.583 13308.849 - 13409.674: 95.7544% ( 39) 00:07:48.583 13409.674 - 13510.498: 95.9144% ( 25) 00:07:48.583 13510.498 - 13611.323: 96.1770% ( 41) 00:07:48.583 13611.323 - 13712.148: 96.3947% ( 34) 00:07:48.583 13712.148 - 13812.972: 96.5868% ( 30) 00:07:48.583 13812.972 - 13913.797: 96.7918% ( 32) 00:07:48.583 13913.797 - 14014.622: 97.0671% ( 43) 00:07:48.583 14014.622 - 14115.446: 97.6242% ( 87) 00:07:48.583 14115.446 - 14216.271: 98.0405% ( 65) 00:07:48.583 14216.271 - 14317.095: 98.3414% ( 47) 00:07:48.583 14317.095 - 14417.920: 98.5272% ( 29) 00:07:48.583 14417.920 - 14518.745: 98.6424% ( 18) 00:07:48.583 14518.745 - 14619.569: 98.7257% ( 13) 00:07:48.583 14619.569 - 14720.394: 98.7385% ( 2) 00:07:48.583 14720.394 - 14821.218: 98.7577% ( 3) 00:07:48.583 14821.218 - 14922.043: 98.7769% ( 3) 00:07:48.583 14922.043 - 15022.868: 98.7833% ( 1) 00:07:48.583 15022.868 - 15123.692: 98.8473% ( 10) 00:07:48.583 15123.692 - 15224.517: 99.1355% ( 45) 00:07:48.583 15224.517 - 15325.342: 99.1739% ( 6) 00:07:48.583 15325.342 - 15426.166: 99.1803% ( 1) 00:07:48.583 21878.942 - 21979.766: 99.2059% ( 4) 00:07:48.583 21979.766 - 22080.591: 99.2316% ( 4) 00:07:48.583 22080.591 - 22181.415: 99.2572% ( 4) 00:07:48.583 22181.415 - 22282.240: 99.2764% ( 3) 00:07:48.583 22282.240 - 22383.065: 99.3020% ( 4) 00:07:48.583 22383.065 - 22483.889: 99.3276% ( 4) 00:07:48.583 22483.889 - 22584.714: 99.3532% ( 4) 00:07:48.583 22584.714 - 22685.538: 99.3724% ( 3) 00:07:48.583 22685.538 - 22786.363: 99.3981% ( 4) 00:07:48.583 22786.363 - 22887.188: 99.4173% ( 3) 00:07:48.583 22887.188 - 22988.012: 99.4429% ( 4) 00:07:48.583 22988.012 - 23088.837: 99.4685% ( 4) 00:07:48.583 23088.837 - 23189.662: 99.4877% ( 3) 00:07:48.583 23189.662 - 23290.486: 99.5133% ( 4) 00:07:48.583 23290.486 - 23391.311: 99.5389% ( 4) 00:07:48.583 23391.311 - 23492.135: 99.5645% ( 4) 00:07:48.583 23492.135 - 23592.960: 99.5838% ( 3) 00:07:48.583 23592.960 - 23693.785: 99.5902% ( 1) 00:07:48.583 28432.542 - 28634.191: 99.6414% ( 8) 00:07:48.583 28634.191 - 28835.840: 99.6862% ( 7) 00:07:48.583 28835.840 - 29037.489: 99.7310% ( 7) 00:07:48.583 29037.489 - 29239.138: 99.7759% ( 7) 00:07:48.583 29239.138 - 29440.788: 99.8271% ( 8) 00:07:48.583 29440.788 - 29642.437: 99.8719% ( 7) 00:07:48.583 29642.437 - 29844.086: 99.9232% ( 8) 00:07:48.583 29844.086 - 30045.735: 99.9744% ( 8) 00:07:48.583 30045.735 - 30247.385: 100.0000% ( 4) 00:07:48.583 00:07:48.583 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:48.583 ============================================================================== 00:07:48.583 Range in us Cumulative IO count 00:07:48.583 5847.828 - 5873.034: 0.0191% ( 3) 00:07:48.583 5873.034 - 5898.240: 0.0255% ( 1) 00:07:48.583 5923.446 - 5948.652: 0.0383% ( 2) 00:07:48.583 5948.652 - 5973.858: 0.0510% ( 2) 00:07:48.583 5973.858 - 5999.065: 0.0702% ( 3) 00:07:48.583 5999.065 - 6024.271: 0.0829% ( 2) 00:07:48.583 6024.271 - 6049.477: 0.1467% ( 10) 00:07:48.583 6049.477 - 6074.683: 0.1913% ( 7) 00:07:48.583 6074.683 - 6099.889: 0.2615% ( 11) 00:07:48.583 6099.889 - 6125.095: 0.4145% ( 24) 00:07:48.583 6125.095 - 6150.302: 0.7270% ( 49) 00:07:48.583 6150.302 - 6175.508: 1.0842% ( 56) 00:07:48.583 6175.508 - 6200.714: 1.5880% ( 79) 00:07:48.583 6200.714 - 6225.920: 2.1301% ( 85) 00:07:48.583 6225.920 - 6251.126: 3.0676% ( 147) 00:07:48.583 6251.126 - 6276.332: 4.0370% ( 152) 00:07:48.583 6276.332 - 6301.538: 5.5102% ( 231) 00:07:48.583 6301.538 - 
6326.745: 7.3469% ( 288) 00:07:48.583 6326.745 - 6351.951: 9.0497% ( 267) 00:07:48.583 6351.951 - 6377.157: 10.7334% ( 264) 00:07:48.583 6377.157 - 6402.363: 12.2768% ( 242) 00:07:48.583 6402.363 - 6427.569: 14.0051% ( 271) 00:07:48.583 6427.569 - 6452.775: 15.5676% ( 245) 00:07:48.583 6452.775 - 6503.188: 19.1390% ( 560) 00:07:48.583 6503.188 - 6553.600: 23.2207% ( 640) 00:07:48.583 6553.600 - 6604.012: 26.0714% ( 447) 00:07:48.583 6604.012 - 6654.425: 29.2666% ( 501) 00:07:48.583 6654.425 - 6704.837: 32.7423% ( 545) 00:07:48.583 6704.837 - 6755.249: 36.9133% ( 654) 00:07:48.583 6755.249 - 6805.662: 40.9439% ( 632) 00:07:48.583 6805.662 - 6856.074: 44.6365% ( 579) 00:07:48.583 6856.074 - 6906.486: 48.1250% ( 547) 00:07:48.583 6906.486 - 6956.898: 51.3010% ( 498) 00:07:48.583 6956.898 - 7007.311: 52.8763% ( 247) 00:07:48.584 7007.311 - 7057.723: 55.1849% ( 362) 00:07:48.584 7057.723 - 7108.135: 56.8176% ( 256) 00:07:48.584 7108.135 - 7158.548: 57.9783% ( 182) 00:07:48.584 7158.548 - 7208.960: 59.5791% ( 251) 00:07:48.584 7208.960 - 7259.372: 60.5102% ( 146) 00:07:48.584 7259.372 - 7309.785: 61.5051% ( 156) 00:07:48.584 7309.785 - 7360.197: 62.2768% ( 121) 00:07:48.584 7360.197 - 7410.609: 63.0230% ( 117) 00:07:48.584 7410.609 - 7461.022: 63.6352% ( 96) 00:07:48.584 7461.022 - 7511.434: 63.9477% ( 49) 00:07:48.584 7511.434 - 7561.846: 64.3814% ( 68) 00:07:48.584 7561.846 - 7612.258: 64.5918% ( 33) 00:07:48.584 7612.258 - 7662.671: 64.9426% ( 55) 00:07:48.584 7662.671 - 7713.083: 65.3444% ( 63) 00:07:48.584 7713.083 - 7763.495: 65.5867% ( 38) 00:07:48.584 7763.495 - 7813.908: 66.0268% ( 69) 00:07:48.584 7813.908 - 7864.320: 66.4286% ( 63) 00:07:48.584 7864.320 - 7914.732: 66.6901% ( 41) 00:07:48.584 7914.732 - 7965.145: 66.8941% ( 32) 00:07:48.584 7965.145 - 8015.557: 67.1110% ( 34) 00:07:48.584 8015.557 - 8065.969: 67.2832% ( 27) 00:07:48.584 8065.969 - 8116.382: 67.6148% ( 52) 00:07:48.584 8116.382 - 8166.794: 67.8125% ( 31) 00:07:48.584 8166.794 - 8217.206: 68.0357% ( 35) 00:07:48.584 8217.206 - 8267.618: 68.3227% ( 45) 00:07:48.584 8267.618 - 8318.031: 68.7309% ( 64) 00:07:48.584 8318.031 - 8368.443: 69.1518% ( 66) 00:07:48.584 8368.443 - 8418.855: 69.8214% ( 105) 00:07:48.584 8418.855 - 8469.268: 70.4847% ( 104) 00:07:48.584 8469.268 - 8519.680: 70.8163% ( 52) 00:07:48.584 8519.680 - 8570.092: 71.0523% ( 37) 00:07:48.584 8570.092 - 8620.505: 71.2819% ( 36) 00:07:48.584 8620.505 - 8670.917: 71.5370% ( 40) 00:07:48.584 8670.917 - 8721.329: 71.7793% ( 38) 00:07:48.584 8721.329 - 8771.742: 72.1110% ( 52) 00:07:48.584 8771.742 - 8822.154: 72.3724% ( 41) 00:07:48.584 8822.154 - 8872.566: 72.6020% ( 36) 00:07:48.584 8872.566 - 8922.978: 72.8444% ( 38) 00:07:48.584 8922.978 - 8973.391: 73.1441% ( 47) 00:07:48.584 8973.391 - 9023.803: 73.5204% ( 59) 00:07:48.584 9023.803 - 9074.215: 74.0179% ( 78) 00:07:48.584 9074.215 - 9124.628: 74.2985% ( 44) 00:07:48.584 9124.628 - 9175.040: 74.6429% ( 54) 00:07:48.584 9175.040 - 9225.452: 75.0191% ( 59) 00:07:48.584 9225.452 - 9275.865: 75.5357% ( 81) 00:07:48.584 9275.865 - 9326.277: 76.0268% ( 77) 00:07:48.584 9326.277 - 9376.689: 76.5497% ( 82) 00:07:48.584 9376.689 - 9427.102: 77.3214% ( 121) 00:07:48.584 9427.102 - 9477.514: 77.8189% ( 78) 00:07:48.584 9477.514 - 9527.926: 78.1696% ( 55) 00:07:48.584 9527.926 - 9578.338: 78.5140% ( 54) 00:07:48.584 9578.338 - 9628.751: 78.9541% ( 69) 00:07:48.584 9628.751 - 9679.163: 79.5408% ( 92) 00:07:48.584 9679.163 - 9729.575: 79.9809% ( 69) 00:07:48.584 9729.575 - 9779.988: 80.3125% ( 52) 00:07:48.584 9779.988 - 
9830.400: 80.6250% ( 49) 00:07:48.584 9830.400 - 9880.812: 81.4222% ( 125) 00:07:48.584 9880.812 - 9931.225: 81.7793% ( 56) 00:07:48.584 9931.225 - 9981.637: 82.3724% ( 93) 00:07:48.584 9981.637 - 10032.049: 82.6276% ( 40) 00:07:48.584 10032.049 - 10082.462: 82.8444% ( 34) 00:07:48.584 10082.462 - 10132.874: 83.0357% ( 30) 00:07:48.584 10132.874 - 10183.286: 83.2462% ( 33) 00:07:48.584 10183.286 - 10233.698: 83.4311% ( 29) 00:07:48.584 10233.698 - 10284.111: 83.5778% ( 23) 00:07:48.584 10284.111 - 10334.523: 83.9541% ( 59) 00:07:48.584 10334.523 - 10384.935: 84.1454% ( 30) 00:07:48.584 10384.935 - 10435.348: 84.3240% ( 28) 00:07:48.584 10435.348 - 10485.760: 84.6556% ( 52) 00:07:48.584 10485.760 - 10536.172: 84.8661% ( 33) 00:07:48.584 10536.172 - 10586.585: 85.0702% ( 32) 00:07:48.584 10586.585 - 10636.997: 85.2679% ( 31) 00:07:48.584 10636.997 - 10687.409: 85.3508% ( 13) 00:07:48.584 10687.409 - 10737.822: 85.4337% ( 13) 00:07:48.584 10737.822 - 10788.234: 85.5293% ( 15) 00:07:48.584 10788.234 - 10838.646: 85.6441% ( 18) 00:07:48.584 10838.646 - 10889.058: 85.7526% ( 17) 00:07:48.584 10889.058 - 10939.471: 85.8801% ( 20) 00:07:48.584 10939.471 - 10989.883: 86.0587% ( 28) 00:07:48.584 10989.883 - 11040.295: 86.2564% ( 31) 00:07:48.584 11040.295 - 11090.708: 86.3265% ( 11) 00:07:48.584 11090.708 - 11141.120: 86.4031% ( 12) 00:07:48.584 11141.120 - 11191.532: 86.5434% ( 22) 00:07:48.584 11191.532 - 11241.945: 86.6964% ( 24) 00:07:48.584 11241.945 - 11292.357: 86.9962% ( 47) 00:07:48.584 11292.357 - 11342.769: 87.2385% ( 38) 00:07:48.584 11342.769 - 11393.182: 87.4171% ( 28) 00:07:48.584 11393.182 - 11443.594: 87.6020% ( 29) 00:07:48.584 11443.594 - 11494.006: 87.7806% ( 28) 00:07:48.584 11494.006 - 11544.418: 88.0038% ( 35) 00:07:48.584 11544.418 - 11594.831: 88.1633% ( 25) 00:07:48.584 11594.831 - 11645.243: 88.3673% ( 32) 00:07:48.584 11645.243 - 11695.655: 88.6416% ( 43) 00:07:48.584 11695.655 - 11746.068: 89.2347% ( 93) 00:07:48.584 11746.068 - 11796.480: 89.6939% ( 72) 00:07:48.584 11796.480 - 11846.892: 89.9298% ( 37) 00:07:48.584 11846.892 - 11897.305: 90.1849% ( 40) 00:07:48.584 11897.305 - 11947.717: 90.5038% ( 50) 00:07:48.584 11947.717 - 11998.129: 90.9184% ( 65) 00:07:48.584 11998.129 - 12048.542: 91.2628% ( 54) 00:07:48.584 12048.542 - 12098.954: 91.5625% ( 47) 00:07:48.584 12098.954 - 12149.366: 91.8559% ( 46) 00:07:48.584 12149.366 - 12199.778: 91.9834% ( 20) 00:07:48.584 12199.778 - 12250.191: 92.0727% ( 14) 00:07:48.584 12250.191 - 12300.603: 92.1684% ( 15) 00:07:48.584 12300.603 - 12351.015: 92.2449% ( 12) 00:07:48.584 12351.015 - 12401.428: 92.3214% ( 12) 00:07:48.584 12401.428 - 12451.840: 92.4298% ( 17) 00:07:48.584 12451.840 - 12502.252: 92.5255% ( 15) 00:07:48.584 12502.252 - 12552.665: 92.7168% ( 30) 00:07:48.584 12552.665 - 12603.077: 92.9401% ( 35) 00:07:48.584 12603.077 - 12653.489: 93.0548% ( 18) 00:07:48.584 12653.489 - 12703.902: 93.1569% ( 16) 00:07:48.584 12703.902 - 12754.314: 93.2526% ( 15) 00:07:48.584 12754.314 - 12804.726: 93.3610% ( 17) 00:07:48.584 12804.726 - 12855.138: 93.4375% ( 12) 00:07:48.584 12855.138 - 12905.551: 93.5459% ( 17) 00:07:48.584 12905.551 - 13006.375: 93.7309% ( 29) 00:07:48.584 13006.375 - 13107.200: 93.9222% ( 30) 00:07:48.584 13107.200 - 13208.025: 94.2474% ( 51) 00:07:48.584 13208.025 - 13308.849: 94.5536% ( 48) 00:07:48.584 13308.849 - 13409.674: 94.9872% ( 68) 00:07:48.584 13409.674 - 13510.498: 95.4464% ( 72) 00:07:48.584 13510.498 - 13611.323: 96.0587% ( 96) 00:07:48.584 13611.323 - 13712.148: 96.5370% ( 75) 00:07:48.584 
13712.148 - 13812.972: 96.8431% ( 48) 00:07:48.584 13812.972 - 13913.797: 97.1939% ( 55) 00:07:48.584 13913.797 - 14014.622: 97.4872% ( 46) 00:07:48.584 14014.622 - 14115.446: 97.9719% ( 76) 00:07:48.584 14115.446 - 14216.271: 98.2334% ( 41) 00:07:48.584 14216.271 - 14317.095: 98.4184% ( 29) 00:07:48.584 14317.095 - 14417.920: 98.5140% ( 15) 00:07:48.584 14417.920 - 14518.745: 98.5906% ( 12) 00:07:48.584 14518.745 - 14619.569: 98.6416% ( 8) 00:07:48.584 14619.569 - 14720.394: 98.6926% ( 8) 00:07:48.584 14720.394 - 14821.218: 98.8010% ( 17) 00:07:48.584 14821.218 - 14922.043: 98.9987% ( 31) 00:07:48.584 14922.043 - 15022.868: 99.1199% ( 19) 00:07:48.584 15022.868 - 15123.692: 99.1582% ( 6) 00:07:48.584 15123.692 - 15224.517: 99.1837% ( 4) 00:07:48.584 15224.517 - 15325.342: 99.1964% ( 2) 00:07:48.584 15325.342 - 15426.166: 99.2219% ( 4) 00:07:48.584 15426.166 - 15526.991: 99.2474% ( 4) 00:07:48.584 15526.991 - 15627.815: 99.2666% ( 3) 00:07:48.584 15627.815 - 15728.640: 99.2921% ( 4) 00:07:48.584 15728.640 - 15829.465: 99.3176% ( 4) 00:07:48.584 15829.465 - 15930.289: 99.3367% ( 3) 00:07:48.584 15930.289 - 16031.114: 99.3622% ( 4) 00:07:48.584 16031.114 - 16131.938: 99.3878% ( 4) 00:07:48.584 16131.938 - 16232.763: 99.4133% ( 4) 00:07:48.584 16232.763 - 16333.588: 99.4388% ( 4) 00:07:48.584 16333.588 - 16434.412: 99.4579% ( 3) 00:07:48.584 16434.412 - 16535.237: 99.4834% ( 4) 00:07:48.584 16535.237 - 16636.062: 99.5089% ( 4) 00:07:48.584 16636.062 - 16736.886: 99.5344% ( 4) 00:07:48.584 16736.886 - 16837.711: 99.5599% ( 4) 00:07:48.584 16837.711 - 16938.535: 99.5855% ( 4) 00:07:48.584 16938.535 - 17039.360: 99.5918% ( 1) 00:07:48.584 21173.169 - 21273.994: 99.6046% ( 2) 00:07:48.584 21273.994 - 21374.818: 99.6301% ( 4) 00:07:48.584 21374.818 - 21475.643: 99.6492% ( 3) 00:07:48.584 21475.643 - 21576.468: 99.6747% ( 4) 00:07:48.584 21576.468 - 21677.292: 99.7003% ( 4) 00:07:48.584 21677.292 - 21778.117: 99.7258% ( 4) 00:07:48.584 21778.117 - 21878.942: 99.7449% ( 3) 00:07:48.584 21878.942 - 21979.766: 99.7704% ( 4) 00:07:48.584 21979.766 - 22080.591: 99.7959% ( 4) 00:07:48.584 22080.591 - 22181.415: 99.8151% ( 3) 00:07:48.584 22181.415 - 22282.240: 99.8406% ( 4) 00:07:48.584 22282.240 - 22383.065: 99.8661% ( 4) 00:07:48.584 22383.065 - 22483.889: 99.8916% ( 4) 00:07:48.584 22483.889 - 22584.714: 99.9171% ( 4) 00:07:48.584 22584.714 - 22685.538: 99.9362% ( 3) 00:07:48.584 22685.538 - 22786.363: 99.9617% ( 4) 00:07:48.584 22786.363 - 22887.188: 99.9872% ( 4) 00:07:48.584 22887.188 - 22988.012: 100.0000% ( 2) 00:07:48.584 00:07:48.584 23:09:20 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:48.584 00:07:48.584 real 0m2.535s 00:07:48.584 user 0m2.209s 00:07:48.584 sys 0m0.218s 00:07:48.584 23:09:20 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.584 23:09:20 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:48.584 ************************************ 00:07:48.584 END TEST nvme_perf 00:07:48.584 ************************************ 00:07:48.843 23:09:20 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:48.843 23:09:20 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:48.843 23:09:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.843 23:09:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.843 ************************************ 00:07:48.843 START TEST nvme_hello_world 00:07:48.843 ************************************ 
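The three histograms above come from SPDK's perf example, which prints one cumulative-latency table per namespace when software latency tracking is enabled. A minimal manual rerun against a single controller might look like the sketch below; the flag spellings are assumptions from memory of the perf tool and can vary across SPDK versions.

  # Sketch: rerun perf against one PCIe controller (flags are assumptions;
  # verify with the tool's --help output for the SPDK version in this tree).
  sudo /home/vagrant/spdk_repo/spdk/build/examples/perf \
      -q 128 -o 4096 -w randread -t 1 \
      -r 'trtype:PCIe traddr:0000:00:12.0' \
      -L   # software latency tracking; produces per-namespace histograms as above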
00:07:48.843 23:09:20 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:48.843 23:09:20 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:48.843 23:09:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:48.843 23:09:20 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:48.843 ************************************
00:07:48.843 START TEST nvme_hello_world
00:07:48.843 ************************************
00:07:48.843 23:09:20 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:48.843 Initializing NVMe Controllers
00:07:48.843 Attached to 0000:00:13.0
00:07:48.843   Namespace ID: 1 size: 1GB
00:07:48.843 Attached to 0000:00:10.0
00:07:48.843   Namespace ID: 1 size: 6GB
00:07:48.843 Attached to 0000:00:11.0
00:07:48.843   Namespace ID: 1 size: 5GB
00:07:48.843 Attached to 0000:00:12.0
00:07:48.843   Namespace ID: 1 size: 4GB
00:07:48.843   Namespace ID: 2 size: 4GB
00:07:48.843   Namespace ID: 3 size: 4GB
00:07:48.843 Initialization complete.
00:07:48.843 INFO: using host memory buffer for IO
00:07:48.843 Hello world!
00:07:48.843 INFO: using host memory buffer for IO
00:07:48.843 Hello world!
00:07:48.843 INFO: using host memory buffer for IO
00:07:48.843 Hello world!
00:07:48.843 INFO: using host memory buffer for IO
00:07:48.843 Hello world!
00:07:48.843 INFO: using host memory buffer for IO
00:07:48.843 Hello world!
00:07:48.843 INFO: using host memory buffer for IO
00:07:48.843 Hello world!
00:07:48.843
00:07:48.843 real	0m0.226s
00:07:48.843 user	0m0.078s
00:07:48.843 sys	0m0.101s
00:07:48.843 23:09:21 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:48.843 23:09:21 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:48.843 ************************************
00:07:48.843 END TEST nvme_hello_world
00:07:48.843 ************************************
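Every section of this log is produced by the run_test helper from autotest_common.sh, which prints the START/END banners and the real/user/sys timing lines seen here. A simplified sketch of that pattern follows; it is not the actual helper, which also manages xtrace state and exit-status bookkeeping.

  # Minimal sketch of the run_test wrapper pattern (assumed simplification):
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"            # run the wrapped test binary or shell function
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }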
00:07:49.105 23:09:21 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:49.105 23:09:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:49.105 23:09:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.105 23:09:21 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:49.105 ************************************
00:07:49.105 START TEST nvme_sgl
00:07:49.105 ************************************
00:07:49.105 23:09:21 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:49.105 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:49.105 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:49.105 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:49.105 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:49.105 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:49.105 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:49.105 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:49.105 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:49.105 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:49.105 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:49.105 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:49.105 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:49.105 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:49.105 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:49.105 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:49.105 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:49.105 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:49.105 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:49.105 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:49.105 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:49.105 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:49.370 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:49.370 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:49.370 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:49.370 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:49.370 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:49.370 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:49.370 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:49.370 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:49.370 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:49.370 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:49.370 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:49.370 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:49.370 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:49.370 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:49.370 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:49.370 NVMe Readv/Writev Request test
00:07:49.370 Attached to 0000:00:13.0
00:07:49.370 Attached to 0000:00:10.0
00:07:49.370 Attached to 0000:00:11.0
00:07:49.370 Attached to 0000:00:12.0
00:07:49.370 0000:00:10.0: build_io_request_2 test passed
00:07:49.370 0000:00:10.0: build_io_request_4 test passed
00:07:49.370 0000:00:10.0: build_io_request_5 test passed
00:07:49.370 0000:00:10.0: build_io_request_6 test passed
00:07:49.370 0000:00:10.0: build_io_request_7 test passed
00:07:49.370 0000:00:10.0: build_io_request_10 test passed
00:07:49.370 0000:00:11.0: build_io_request_2 test passed
00:07:49.370 0000:00:11.0: build_io_request_4 test passed
00:07:49.370 0000:00:11.0: build_io_request_5 test passed
00:07:49.370 0000:00:11.0: build_io_request_6 test passed
00:07:49.370 0000:00:11.0: build_io_request_7 test passed
00:07:49.370 0000:00:11.0: build_io_request_10 test passed
00:07:49.370 Cleaning up...
00:07:49.370
00:07:49.370 real	0m0.298s
00:07:49.370 user	0m0.152s
00:07:49.370 sys	0m0.105s
00:07:49.370 23:09:21 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.370 23:09:21 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:49.370 ************************************
00:07:49.370 END TEST nvme_sgl
00:07:49.370 ************************************
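The sgl test intentionally submits readv/writev requests with invalid lengths and expects the driver to reject them, so the mix of "Invalid IO length parameter" and "test passed" lines above is the intended outcome. One way to tally the two outcomes per controller from a saved copy of this console output (the log file name is hypothetical):

  # Count rejected vs. passed build_io_request cases per controller.
  grep -Eo '0000:00:1[0-3]\.0: build_io_request_[0-9]+ (Invalid IO length parameter|test passed)' console.log \
      | sort | uniq -c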
00:07:49.370 23:09:21 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:49.370 23:09:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:49.370 23:09:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.370 23:09:21 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:49.370 ************************************
00:07:49.370 START TEST nvme_e2edp
00:07:49.370 ************************************
00:07:49.370 23:09:21 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:49.630 NVMe Write/Read with End-to-End data protection test
00:07:49.630 Attached to 0000:00:13.0
00:07:49.630 Attached to 0000:00:10.0
00:07:49.630 Attached to 0000:00:11.0
00:07:49.630 Attached to 0000:00:12.0
00:07:49.630 Cleaning up...
00:07:49.630
00:07:49.630 real	0m0.216s
00:07:49.630 user	0m0.064s
00:07:49.630 sys	0m0.101s
00:07:49.630 23:09:21 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.630 23:09:21 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:49.630 ************************************
00:07:49.630 END TEST nvme_e2edp
00:07:49.630 ************************************
00:07:49.630 23:09:21 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:49.630 23:09:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:49.630 23:09:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.630 23:09:21 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:49.630 ************************************
00:07:49.630 START TEST nvme_reserve
00:07:49.630 ************************************
00:07:49.630 23:09:21 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:49.891 =====================================================
00:07:49.891 NVMe Controller at PCI bus 0, device 19, function 0
00:07:49.891 =====================================================
00:07:49.891 Reservations:                Not Supported
00:07:49.891 =====================================================
00:07:49.891 NVMe Controller at PCI bus 0, device 16, function 0
00:07:49.891 =====================================================
00:07:49.891 Reservations:                Not Supported
00:07:49.891 =====================================================
00:07:49.891 NVMe Controller at PCI bus 0, device 17, function 0
00:07:49.891 =====================================================
00:07:49.891 Reservations:                Not Supported
00:07:49.891 =====================================================
00:07:49.891 NVMe Controller at PCI bus 0, device 18, function 0
00:07:49.891 =====================================================
00:07:49.891 Reservations:                Not Supported
00:07:49.891 Reservation test passed
00:07:49.891
00:07:49.891 real	0m0.234s
00:07:49.891 user	0m0.068s
00:07:49.891 sys	0m0.112s
00:07:49.891 23:09:22 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.891 23:09:22 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:49.891 ************************************
00:07:49.891 END TEST nvme_reserve
00:07:49.891 ************************************
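All four emulated QEMU controllers report "Reservations: Not Supported", so the reserve test passes trivially on this VM. Reservation support is advertised in the controller's ONCS field; a hedged way to inspect it on a kernel-attached device with nvme-cli (device path is hypothetical, and the bit position should be confirmed against the NVMe base specification):

  # ONCS bit 5 is the Reservations capability per the NVMe base spec.
  nvme id-ctrl /dev/nvme0 | grep -i oncs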
00:07:49.891 23:09:22 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:49.891 23:09:22 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:49.891 23:09:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.891 23:09:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:49.891 ************************************
00:07:49.891 START TEST nvme_err_injection
00:07:49.891 ************************************
00:07:49.891 23:09:22 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:50.167 NVMe Error Injection test
00:07:50.167 Attached to 0000:00:13.0
00:07:50.167 Attached to 0000:00:10.0
00:07:50.167 Attached to 0000:00:11.0
00:07:50.167 Attached to 0000:00:12.0
00:07:50.167 0000:00:10.0: get features failed as expected
00:07:50.167 0000:00:11.0: get features failed as expected
00:07:50.167 0000:00:12.0: get features failed as expected
00:07:50.167 0000:00:13.0: get features failed as expected
00:07:50.167 0000:00:13.0: get features successfully as expected
00:07:50.167 0000:00:10.0: get features successfully as expected
00:07:50.167 0000:00:11.0: get features successfully as expected
00:07:50.167 0000:00:12.0: get features successfully as expected
00:07:50.167 0000:00:12.0: read failed as expected
00:07:50.167 0000:00:13.0: read failed as expected
00:07:50.167 0000:00:10.0: read failed as expected
00:07:50.167 0000:00:11.0: read failed as expected
00:07:50.167 0000:00:12.0: read successfully as expected
00:07:50.167 0000:00:13.0: read successfully as expected
00:07:50.167 0000:00:10.0: read successfully as expected
00:07:50.167 0000:00:11.0: read successfully as expected
00:07:50.167 Cleaning up...
00:07:50.167
00:07:50.167 real	0m0.240s
00:07:50.167 user	0m0.095s
00:07:50.167 sys	0m0.098s
00:07:50.167 23:09:22 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.167 23:09:22 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:50.167 ************************************
00:07:50.167 END TEST nvme_err_injection
00:07:50.167 ************************************
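The err_injection test arms SPDK's command error injection, confirms that Get Features and reads fail while it is armed, then disarms it and confirms the same commands succeed, which is exactly the failed/successfully pairing printed above. For comparison, the same admin command can be issued by hand with nvme-cli against a kernel-visible controller (device path hypothetical):

  # Plain Get Features for FID 0x01 (Arbitration) on a kernel-attached device.
  nvme get-feature /dev/nvme0 -f 0x01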
00:07:50.167 23:09:22 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:50.167 23:09:22 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:50.167 23:09:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.167 23:09:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:50.167 ************************************
00:07:50.167 START TEST nvme_overhead
00:07:50.167 ************************************
00:07:50.167 23:09:22 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:51.547 Initializing NVMe Controllers
00:07:51.547 Attached to 0000:00:13.0
00:07:51.547 Attached to 0000:00:10.0
00:07:51.547 Attached to 0000:00:11.0
00:07:51.547 Attached to 0000:00:12.0
00:07:51.547 Initialization complete. Launching workers.
00:07:51.547
00:07:51.547 submit (in ns)   avg, min, max =  12510.0,  11579.2, 239551.5
00:07:51.547 complete (in ns) avg, min, max =   8263.6,   7790.0, 129530.8
00:07:51.547
00:07:51.547 Submit histogram
00:07:51.547 ================
00:07:51.547        Range in us     Cumulative     Count
00:07:51.547    11.569 -  11.618:   0.0397% (    4)
00:07:51.547 [ ... intermediate buckets up to 239.458 us omitted ... ]
00:07:51.548   239.458 - 241.034: 100.0000% (    1)
00:07:51.548
00:07:51.548 Complete histogram
00:07:51.548 ==================
00:07:51.548        Range in us     Cumulative     Count
00:07:51.548     7.778 -   7.828:   0.1488% (   15)
00:07:51.548 [ ... intermediate buckets up to 129.182 us omitted ... ]
00:07:51.548   129.182 - 129.969: 100.0000% (    1)
00:07:51.548
00:07:51.548 real	0m1.222s
00:07:51.548 user	0m1.074s
00:07:51.548 sys	0m0.101s
00:07:51.548 23:09:23 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:51.548 23:09:23 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:07:51.548 ************************************
00:07:51.548 END TEST nvme_overhead
00:07:51.548 ************************************
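The overhead tool reports average/min/max nanoseconds spent in the submit and complete paths, plus the two histograms condensed above. Its flags are visible in the invocation itself, so a longer rerun with a larger IO size is straightforward (the values below are arbitrary examples):

  # -o io size in bytes, -t seconds, -H print histograms, -i shm id, as used above.
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 131072 -t 5 -H -i 0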
00:07:51.548 23:09:23 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:51.548 23:09:23 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:51.548 23:09:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:51.548 23:09:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:51.548 ************************************
00:07:51.548 START TEST nvme_arbitration
00:07:51.548 ************************************
00:07:51.548 23:09:23 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:54.823 Initializing NVMe Controllers
00:07:54.823 Attached to 0000:00:13.0
00:07:54.823 Attached to 0000:00:10.0
00:07:54.823 Attached to 0000:00:11.0
00:07:54.823 Attached to 0000:00:12.0
00:07:54.823 Associating QEMU NVMe Ctrl (12343 ) with lcore 0
00:07:54.823 Associating QEMU NVMe Ctrl (12340 ) with lcore 1
00:07:54.823 Associating QEMU NVMe Ctrl (12341 ) with lcore 2
00:07:54.823 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:07:54.823 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:07:54.823 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:07:54.823 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:07:54.823 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:07:54.823 Initialization complete. Launching workers.
00:07:54.823 Starting thread on core 1 with urgent priority queue
00:07:54.823 Starting thread on core 2 with urgent priority queue
00:07:54.823 Starting thread on core 3 with urgent priority queue
00:07:54.823 Starting thread on core 0 with urgent priority queue
00:07:54.823 QEMU NVMe Ctrl (12343 ) core 0:  832.00 IO/s 120.19 secs/100000 ios
00:07:54.823 QEMU NVMe Ctrl (12342 ) core 0:  832.00 IO/s 120.19 secs/100000 ios
00:07:54.823 QEMU NVMe Ctrl (12340 ) core 1:  917.33 IO/s 109.01 secs/100000 ios
00:07:54.823 QEMU NVMe Ctrl (12342 ) core 1:  917.33 IO/s 109.01 secs/100000 ios
00:07:54.823 QEMU NVMe Ctrl (12341 ) core 2:  789.33 IO/s 126.69 secs/100000 ios
00:07:54.823 QEMU NVMe Ctrl (12342 ) core 3:  938.67 IO/s 106.53 secs/100000 ios
00:07:54.823 ========================================================
00:07:54.823
00:07:54.823 real	0m3.294s
00:07:54.823 user	0m9.223s
00:07:54.823 sys	0m0.120s
00:07:54.823 23:09:26 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:54.823 23:09:26 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:07:54.823 ************************************
00:07:54.823 END TEST nvme_arbitration
00:07:54.823 ************************************
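The arbitration example prints its expanded configuration before running, which documents the knobs it accepts (-q queue depth, -s io size, -w workload, -M read mix, -c core mask, -t seconds, and so on, as shown in the "run with configuration" line). A hypothetical longer rerun on the same four cores, with flag meanings inferred from that printed line rather than from the tool's documentation:

  # Sketch only: -t run seconds, -c core mask, -q queue depth, -i shm id.
  sudo /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 10 -c 0xf -q 64 -i 0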
00:07:54.823 23:09:27 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:54.823 23:09:27 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:54.823 23:09:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:54.823 23:09:27 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:54.823 ************************************
00:07:54.823 START TEST nvme_single_aen
00:07:54.823 ************************************
00:07:54.823 23:09:27 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:55.081 Asynchronous Event Request test
00:07:55.081 Attached to 0000:00:13.0
00:07:55.081 Attached to 0000:00:10.0
00:07:55.081 Attached to 0000:00:11.0
00:07:55.081 Attached to 0000:00:12.0
00:07:55.082 Reset controller to setup AER completions for this process
00:07:55.082 Registering asynchronous event callbacks...
00:07:55.082 Getting orig temperature thresholds of all controllers
00:07:55.082 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:55.082 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:55.082 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:55.082 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:55.082 Setting all controllers temperature threshold low to trigger AER
00:07:55.082 Waiting for all controllers temperature threshold to be set lower
00:07:55.082 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:55.082 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:07:55.082 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:55.082 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:07:55.082 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:55.082 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:07:55.082 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:55.082 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:07:55.082 Waiting for all controllers to trigger AER and reset threshold
00:07:55.082 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:55.082 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:55.082 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:55.082 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:55.082 Cleaning up...
00:07:55.082
00:07:55.082 real	0m0.222s
00:07:55.082 user	0m0.079s
00:07:55.082 sys	0m0.100s
00:07:55.082 23:09:27 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:55.082 23:09:27 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:07:55.082 ************************************
00:07:55.082 END TEST nvme_single_aen
00:07:55.082 ************************************
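The aer test triggers an Asynchronous Event Request by dropping each controller's temperature threshold (Feature ID 0x04) below the current composite temperature, then restoring the original value once the AER fires, which is the threshold/temperature dialogue shown above. A hedged nvme-cli equivalent on a kernel-attached device (device path hypothetical; the value is in Kelvin, and the exact option spelling should be checked against the installed nvme-cli):

  nvme smart-log /dev/nvme0 | grep -i temperature    # current composite temperature
  nvme set-feature /dev/nvme0 -f 0x04 -v 300         # threshold below it, so an AER fires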
00:07:55.082 23:09:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:55.082 23:09:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:55.082 23:09:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:55.082 23:09:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:55.340 [2024-11-25 23:09:27.546457] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. Dropping the request. 00:08:05.301 Executing: test_write_invalid_db 00:08:05.301 Waiting for AER completion... 00:08:05.301 Failure: test_write_invalid_db 00:08:05.301 00:08:05.301 Executing: test_invalid_db_write_overflow_sq 00:08:05.301 Waiting for AER completion... 00:08:05.301 Failure: test_invalid_db_write_overflow_sq 00:08:05.301 00:08:05.301 Executing: test_invalid_db_write_overflow_cq 00:08:05.301 Waiting for AER completion... 00:08:05.301 Failure: test_invalid_db_write_overflow_cq 00:08:05.301 00:08:05.301 23:09:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:05.301 23:09:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:05.302 [2024-11-25 23:09:37.571036] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. Dropping the request. 00:08:15.279 Executing: test_write_invalid_db 00:08:15.279 Waiting for AER completion... 00:08:15.279 Failure: test_write_invalid_db 00:08:15.279 00:08:15.279 Executing: test_invalid_db_write_overflow_sq 00:08:15.279 Waiting for AER completion... 00:08:15.279 Failure: test_invalid_db_write_overflow_sq 00:08:15.279 00:08:15.279 Executing: test_invalid_db_write_overflow_cq 00:08:15.279 Waiting for AER completion... 00:08:15.279 Failure: test_invalid_db_write_overflow_cq 00:08:15.279 00:08:15.279 23:09:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:15.279 23:09:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:15.279 [2024-11-25 23:09:47.622826] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. Dropping the request. 00:08:25.251 Executing: test_write_invalid_db 00:08:25.251 Waiting for AER completion... 00:08:25.251 Failure: test_write_invalid_db 00:08:25.251 00:08:25.251 Executing: test_invalid_db_write_overflow_sq 00:08:25.251 Waiting for AER completion... 00:08:25.251 Failure: test_invalid_db_write_overflow_sq 00:08:25.251 00:08:25.251 Executing: test_invalid_db_write_overflow_cq 00:08:25.251 Waiting for AER completion... 
00:08:25.251 Failure: test_invalid_db_write_overflow_cq 00:08:25.251 00:08:25.251 23:09:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:25.251 23:09:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:25.508 [2024-11-25 23:09:57.660487] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. Dropping the request. 00:08:35.495 Executing: test_write_invalid_db 00:08:35.495 Waiting for AER completion... 00:08:35.495 Failure: test_write_invalid_db 00:08:35.495 00:08:35.495 Executing: test_invalid_db_write_overflow_sq 00:08:35.495 Waiting for AER completion... 00:08:35.495 Failure: test_invalid_db_write_overflow_sq 00:08:35.495 00:08:35.495 Executing: test_invalid_db_write_overflow_cq 00:08:35.495 Waiting for AER completion... 00:08:35.495 Failure: test_invalid_db_write_overflow_cq 00:08:35.495 00:08:35.495 00:08:35.495 real 0m40.210s 00:08:35.495 user 0m34.249s 00:08:35.495 sys 0m5.574s 00:08:35.495 23:10:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.495 23:10:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:35.495 ************************************ 00:08:35.495 END TEST nvme_doorbell_aers 00:08:35.495 ************************************ 00:08:35.495 23:10:07 nvme -- nvme/nvme.sh@97 -- # uname 00:08:35.495 23:10:07 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:35.495 23:10:07 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:35.495 23:10:07 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:35.495 23:10:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.495 23:10:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:35.495 ************************************ 00:08:35.495 START TEST nvme_multi_aen 00:08:35.495 ************************************ 00:08:35.495 23:10:07 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:35.495 [2024-11-25 23:10:07.719699] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. Dropping the request. 00:08:35.495 [2024-11-25 23:10:07.719883] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. Dropping the request. 00:08:35.495 [2024-11-25 23:10:07.719950] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. Dropping the request. 00:08:35.495 [2024-11-25 23:10:07.721510] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. Dropping the request. 00:08:35.495 [2024-11-25 23:10:07.721646] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. Dropping the request. 00:08:35.495 [2024-11-25 23:10:07.721717] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. Dropping the request. 00:08:35.495 [2024-11-25 23:10:07.722795] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. 
Dropping the request. 00:08:35.495 [2024-11-25 23:10:07.722895] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. Dropping the request. 00:08:35.495 [2024-11-25 23:10:07.722965] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. Dropping the request. 00:08:35.495 [2024-11-25 23:10:07.724007] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. Dropping the request. 00:08:35.495 [2024-11-25 23:10:07.724123] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. Dropping the request. 00:08:35.495 [2024-11-25 23:10:07.724188] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63238) is not found. Dropping the request. 00:08:35.495 Child process pid: 63760 00:08:35.753 [Child] Asynchronous Event Request test 00:08:35.753 [Child] Attached to 0000:00:13.0 00:08:35.753 [Child] Attached to 0000:00:10.0 00:08:35.753 [Child] Attached to 0000:00:11.0 00:08:35.753 [Child] Attached to 0000:00:12.0 00:08:35.753 [Child] Registering asynchronous event callbacks... 00:08:35.753 [Child] Getting orig temperature thresholds of all controllers 00:08:35.753 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.753 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.753 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.753 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.753 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:35.753 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.753 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.753 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.753 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.753 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.753 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.753 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.753 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.753 [Child] Cleaning up... 00:08:35.753 Asynchronous Event Request test 00:08:35.753 Attached to 0000:00:13.0 00:08:35.753 Attached to 0000:00:10.0 00:08:35.753 Attached to 0000:00:11.0 00:08:35.753 Attached to 0000:00:12.0 00:08:35.753 Reset controller to setup AER completions for this process 00:08:35.753 Registering asynchronous event callbacks... 
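(The [Child]-prefixed block above comes from the secondary process, pid 63760 per the log, that the traced aer -m -T -i 0 invocation spawns: it runs its own threshold pass against the same four controllers, after which the parent repeats the sequence as the second, untagged "Asynchronous Event Request test" block. The standalone equivalent of the traced run is:

    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0

with -m evidently selecting this multi-process variant, given the [Child] output.)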
00:08:35.753 Getting orig temperature thresholds of all controllers 00:08:35.753 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.753 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.753 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.753 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.753 Setting all controllers temperature threshold low to trigger AER 00:08:35.753 Waiting for all controllers temperature threshold to be set lower 00:08:35.753 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.754 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:35.754 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.754 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:35.754 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.754 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:35.754 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.754 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:35.754 Waiting for all controllers to trigger AER and reset threshold 00:08:35.754 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.754 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.754 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.754 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.754 Cleaning up... 00:08:35.754 ************************************ 00:08:35.754 END TEST nvme_multi_aen 00:08:35.754 ************************************ 00:08:35.754 00:08:35.754 real 0m0.452s 00:08:35.754 user 0m0.145s 00:08:35.754 sys 0m0.198s 00:08:35.754 23:10:07 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.754 23:10:07 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:35.754 23:10:08 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:35.754 23:10:08 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:35.754 23:10:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.754 23:10:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:35.754 ************************************ 00:08:35.754 START TEST nvme_startup 00:08:35.754 ************************************ 00:08:35.754 23:10:08 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:36.011 Initializing NVMe Controllers 00:08:36.011 Attached to 0000:00:13.0 00:08:36.011 Attached to 0000:00:10.0 00:08:36.011 Attached to 0000:00:11.0 00:08:36.011 Attached to 0000:00:12.0 00:08:36.011 Initialization complete. 00:08:36.011 Time used:152944.953 (us). 
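(For the nvme_startup run above, the traced flag is:

    /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000

where -t 1000000 is presumably the allowed bring-up time in microseconds, the same unit as the "Time used:152944.953 (us)" report; on that reading, all four controllers attached in roughly 153 ms, well inside the budget.)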
00:08:36.012 00:08:36.012 real 0m0.213s 00:08:36.012 user 0m0.076s 00:08:36.012 sys 0m0.092s 00:08:36.012 23:10:08 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.012 ************************************ 00:08:36.012 END TEST nvme_startup 00:08:36.012 ************************************ 00:08:36.012 23:10:08 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:36.012 23:10:08 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:36.012 23:10:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.012 23:10:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.012 23:10:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:36.012 ************************************ 00:08:36.012 START TEST nvme_multi_secondary 00:08:36.012 ************************************ 00:08:36.012 23:10:08 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:36.012 23:10:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63816 00:08:36.012 23:10:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:36.012 23:10:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63817 00:08:36.012 23:10:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:36.012 23:10:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:39.385 Initializing NVMe Controllers 00:08:39.385 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:39.385 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:39.385 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:39.385 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:39.385 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:39.386 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:39.386 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:39.386 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:39.386 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:39.386 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:39.386 Initialization complete. Launching workers. 
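(nvme_multi_secondary launches three spdk_nvme_perf instances that attach to the same controllers from separate processes, sharing one DPDK shared-memory instance via -i 0 while running on disjoint core masks. Condensed from the three commands traced above:

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # three concurrent runs sharing one shm instance (-i 0), one core each
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
    wait "$pid0" "$pid1"

The harness waits on the backgrounded PIDs by number, but the shape is the same: overlapping perf runs against the same devices, which is why the three latency tables that follow interleave.)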
00:08:39.386 ======================================================== 00:08:39.386 Latency(us) 00:08:39.386 Device Information : IOPS MiB/s Average min max 00:08:39.386 PCIE (0000:00:13.0) NSID 1 from core 1: 5640.70 22.03 2836.06 745.74 11204.27 00:08:39.386 PCIE (0000:00:10.0) NSID 1 from core 1: 5640.70 22.03 2835.01 753.36 11989.75 00:08:39.386 PCIE (0000:00:11.0) NSID 1 from core 1: 5640.70 22.03 2836.04 768.57 11922.44 00:08:39.386 PCIE (0000:00:12.0) NSID 1 from core 1: 5640.70 22.03 2835.99 764.53 11271.72 00:08:39.386 PCIE (0000:00:12.0) NSID 2 from core 1: 5640.70 22.03 2835.96 751.39 9813.11 00:08:39.386 PCIE (0000:00:12.0) NSID 3 from core 1: 5640.70 22.03 2835.94 753.16 11114.72 00:08:39.386 ======================================================== 00:08:39.386 Total : 33844.18 132.20 2835.83 745.74 11989.75 00:08:39.386 00:08:39.386 Initializing NVMe Controllers 00:08:39.386 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:39.386 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:39.386 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:39.386 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:39.386 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:39.386 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:39.386 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:39.386 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:39.386 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:39.386 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:39.386 Initialization complete. Launching workers. 00:08:39.386 ======================================================== 00:08:39.386 Latency(us) 00:08:39.386 Device Information : IOPS MiB/s Average min max 00:08:39.386 PCIE (0000:00:13.0) NSID 1 from core 2: 2314.62 9.04 6911.61 926.19 29875.11 00:08:39.386 PCIE (0000:00:10.0) NSID 1 from core 2: 2314.62 9.04 6911.36 1011.87 36634.36 00:08:39.386 PCIE (0000:00:11.0) NSID 1 from core 2: 2314.62 9.04 6912.19 917.19 30550.90 00:08:39.386 PCIE (0000:00:12.0) NSID 1 from core 2: 2314.62 9.04 6911.53 1057.97 29285.62 00:08:39.386 PCIE (0000:00:12.0) NSID 2 from core 2: 2314.62 9.04 6912.32 1048.06 29074.94 00:08:39.386 PCIE (0000:00:12.0) NSID 3 from core 2: 2314.62 9.04 6912.28 984.28 33287.71 00:08:39.386 ======================================================== 00:08:39.386 Total : 13887.69 54.25 6911.88 917.19 36634.36 00:08:39.386 00:08:39.386 23:10:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63816 00:08:41.284 Initializing NVMe Controllers 00:08:41.284 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:41.284 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:41.284 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:41.284 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:41.284 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:41.284 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:41.284 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:41.284 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:41.284 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:41.284 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:41.284 Initialization complete. Launching workers. 
00:08:41.285 ======================================================== 00:08:41.285 Latency(us) 00:08:41.285 Device Information : IOPS MiB/s Average min max 00:08:41.285 PCIE (0000:00:13.0) NSID 1 from core 0: 9622.19 37.59 1662.43 717.84 13084.00 00:08:41.285 PCIE (0000:00:10.0) NSID 1 from core 0: 9622.19 37.59 1661.50 692.54 12892.35 00:08:41.285 PCIE (0000:00:11.0) NSID 1 from core 0: 9622.19 37.59 1662.37 702.33 12839.23 00:08:41.285 PCIE (0000:00:12.0) NSID 1 from core 0: 9622.19 37.59 1662.34 633.21 13640.78 00:08:41.285 PCIE (0000:00:12.0) NSID 2 from core 0: 9622.19 37.59 1662.31 612.38 13618.73 00:08:41.285 PCIE (0000:00:12.0) NSID 3 from core 0: 9622.19 37.59 1662.29 590.09 13627.85 00:08:41.285 ======================================================== 00:08:41.285 Total : 57733.16 225.52 1662.21 590.09 13640.78 00:08:41.285 00:08:41.285 23:10:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63817 00:08:41.285 23:10:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63886 00:08:41.285 23:10:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:41.285 23:10:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63887 00:08:41.285 23:10:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:41.285 23:10:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:44.566 Initializing NVMe Controllers 00:08:44.566 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:44.566 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.566 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:44.566 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.566 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:44.566 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:44.566 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:44.566 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:44.566 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:44.566 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:44.566 Initialization complete. Launching workers. 
00:08:44.566 ======================================================== 00:08:44.566 Latency(us) 00:08:44.566 Device Information : IOPS MiB/s Average min max 00:08:44.566 PCIE (0000:00:13.0) NSID 1 from core 1: 5170.36 20.20 3094.12 731.24 13778.57 00:08:44.566 PCIE (0000:00:10.0) NSID 1 from core 1: 5170.36 20.20 3092.98 716.73 13706.89 00:08:44.566 PCIE (0000:00:11.0) NSID 1 from core 1: 5170.36 20.20 3094.95 747.56 13110.52 00:08:44.566 PCIE (0000:00:12.0) NSID 1 from core 1: 5170.36 20.20 3095.35 746.57 13305.01 00:08:44.566 PCIE (0000:00:12.0) NSID 2 from core 1: 5170.36 20.20 3096.30 747.47 14377.88 00:08:44.566 PCIE (0000:00:12.0) NSID 3 from core 1: 5170.36 20.20 3097.12 737.61 12880.27 00:08:44.566 ======================================================== 00:08:44.566 Total : 31022.15 121.18 3095.14 716.73 14377.88 00:08:44.566 00:08:44.566 Initializing NVMe Controllers 00:08:44.566 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:44.566 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.566 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:44.566 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.566 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:44.566 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:44.566 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:44.566 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:44.566 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:44.566 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:44.566 Initialization complete. Launching workers. 00:08:44.566 ======================================================== 00:08:44.566 Latency(us) 00:08:44.566 Device Information : IOPS MiB/s Average min max 00:08:44.566 PCIE (0000:00:13.0) NSID 1 from core 0: 4927.36 19.25 3246.67 830.08 13394.08 00:08:44.566 PCIE (0000:00:10.0) NSID 1 from core 0: 4927.36 19.25 3245.78 930.47 13458.05 00:08:44.566 PCIE (0000:00:11.0) NSID 1 from core 0: 4927.36 19.25 3247.19 889.46 12281.98 00:08:44.566 PCIE (0000:00:12.0) NSID 1 from core 0: 4927.36 19.25 3247.15 844.02 12866.51 00:08:44.566 PCIE (0000:00:12.0) NSID 2 from core 0: 4927.36 19.25 3247.60 830.81 12431.46 00:08:44.566 PCIE (0000:00:12.0) NSID 3 from core 0: 4927.36 19.25 3247.54 823.03 13263.77 00:08:44.566 ======================================================== 00:08:44.566 Total : 29564.17 115.49 3246.99 823.03 13458.05 00:08:44.566 00:08:47.110 Initializing NVMe Controllers 00:08:47.110 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:47.110 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:47.110 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:47.110 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:47.110 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:47.110 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:47.110 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:47.110 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:47.110 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:47.110 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:47.110 Initialization complete. Launching workers. 
00:08:47.110 ======================================================== 00:08:47.110 Latency(us) 00:08:47.110 Device Information : IOPS MiB/s Average min max 00:08:47.110 PCIE (0000:00:13.0) NSID 1 from core 2: 2112.71 8.25 7572.40 1164.39 32905.11 00:08:47.110 PCIE (0000:00:10.0) NSID 1 from core 2: 2112.71 8.25 7571.36 1142.01 32478.68 00:08:47.110 PCIE (0000:00:11.0) NSID 1 from core 2: 2112.71 8.25 7572.96 1148.06 29712.17 00:08:47.110 PCIE (0000:00:12.0) NSID 1 from core 2: 2112.71 8.25 7572.82 1242.51 31357.18 00:08:47.110 PCIE (0000:00:12.0) NSID 2 from core 2: 2109.52 8.24 7584.17 1105.27 28649.48 00:08:47.110 PCIE (0000:00:12.0) NSID 3 from core 2: 2112.71 8.25 7572.57 1135.82 35335.15 00:08:47.110 ======================================================== 00:08:47.110 Total : 12673.09 49.50 7574.38 1105.27 35335.15 00:08:47.110 00:08:47.110 ************************************ 00:08:47.110 END TEST nvme_multi_secondary 00:08:47.110 ************************************ 00:08:47.110 23:10:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63886 00:08:47.110 23:10:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63887 00:08:47.110 00:08:47.110 real 0m10.817s 00:08:47.110 user 0m18.326s 00:08:47.110 sys 0m0.691s 00:08:47.110 23:10:19 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.110 23:10:19 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:47.110 23:10:19 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:47.110 23:10:19 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:47.110 23:10:19 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62835 ]] 00:08:47.110 23:10:19 nvme -- common/autotest_common.sh@1094 -- # kill 62835 00:08:47.110 23:10:19 nvme -- common/autotest_common.sh@1095 -- # wait 62835 00:08:47.110 [2024-11-25 23:10:19.134582] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 00:08:47.110 [2024-11-25 23:10:19.134776] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 00:08:47.110 [2024-11-25 23:10:19.134806] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 00:08:47.110 [2024-11-25 23:10:19.134823] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 00:08:47.110 [2024-11-25 23:10:19.137708] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 00:08:47.110 [2024-11-25 23:10:19.137761] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 00:08:47.110 [2024-11-25 23:10:19.137775] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 00:08:47.110 [2024-11-25 23:10:19.137787] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 00:08:47.110 [2024-11-25 23:10:19.140171] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 
00:08:47.110 [2024-11-25 23:10:19.140222] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 00:08:47.110 [2024-11-25 23:10:19.140235] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 00:08:47.110 [2024-11-25 23:10:19.140247] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 00:08:47.110 [2024-11-25 23:10:19.142829] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 00:08:47.110 [2024-11-25 23:10:19.142882] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 00:08:47.110 [2024-11-25 23:10:19.142895] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 00:08:47.110 [2024-11-25 23:10:19.142908] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63759) is not found. Dropping the request. 00:08:47.110 23:10:19 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:47.110 23:10:19 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:47.110 23:10:19 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:47.110 23:10:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.110 23:10:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.110 23:10:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:47.110 ************************************ 00:08:47.110 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:47.110 ************************************ 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:47.110 * Looking for test storage... 
00:08:47.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:47.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.110 --rc genhtml_branch_coverage=1 00:08:47.110 --rc genhtml_function_coverage=1 00:08:47.110 --rc genhtml_legend=1 00:08:47.110 --rc geninfo_all_blocks=1 00:08:47.110 --rc geninfo_unexecuted_blocks=1 00:08:47.110 00:08:47.110 ' 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:47.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.110 --rc genhtml_branch_coverage=1 00:08:47.110 --rc genhtml_function_coverage=1 00:08:47.110 --rc genhtml_legend=1 00:08:47.110 --rc geninfo_all_blocks=1 00:08:47.110 --rc geninfo_unexecuted_blocks=1 00:08:47.110 00:08:47.110 ' 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:47.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.110 --rc genhtml_branch_coverage=1 00:08:47.110 --rc genhtml_function_coverage=1 00:08:47.110 --rc genhtml_legend=1 00:08:47.110 --rc geninfo_all_blocks=1 00:08:47.110 --rc geninfo_unexecuted_blocks=1 00:08:47.110 00:08:47.110 ' 00:08:47.110 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:47.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.110 --rc genhtml_branch_coverage=1 00:08:47.110 --rc genhtml_function_coverage=1 00:08:47.110 --rc genhtml_legend=1 00:08:47.110 --rc geninfo_all_blocks=1 00:08:47.111 --rc geninfo_unexecuted_blocks=1 00:08:47.111 00:08:47.111 ' 00:08:47.111 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:47.111 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:47.111 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:47.111 
23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:47.111 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:47.111 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:47.111 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:47.111 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:47.111 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:47.111 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:47.111 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:47.111 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:47.111 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:47.111 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:47.111 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:47.370 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:47.370 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:47.370 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:47.370 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:47.370 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:47.370 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64054 00:08:47.370 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:47.370 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:47.370 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64054 00:08:47.370 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64054 ']' 00:08:47.370 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.370 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.370 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
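(In outline, bdev_nvme_reset_stuck_adm_cmd drives a separate target process over its RPC socket: start spdk_tgt, wait for /var/tmp/spdk.sock to accept RPCs, attach a controller, arm an admin-command error injection that holds one command, then reset the controller while that command is stuck. A sketch assembled from the RPCs traced verbatim in this section:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
    spdk_target_pid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # hold the next admin opcode 0x0a (Get Features) for up to 15 s
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    "$rpc" bdev_nvme_reset_controller nvme0

The real flow also launches the stuck command itself via bdev_nvme_send_cmd before the reset, then times how long the reset takes to flush it, which is the diff_time checked against test_timeout at the end.)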
00:08:47.370 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.370 23:10:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:47.370 [2024-11-25 23:10:19.563850] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:08:47.370 [2024-11-25 23:10:19.563968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64054 ] 00:08:47.370 [2024-11-25 23:10:19.732397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.632 [2024-11-25 23:10:19.833564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.632 [2024-11-25 23:10:19.833867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.632 [2024-11-25 23:10:19.834881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.632 [2024-11-25 23:10:19.835005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:48.208 nvme0n1 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_O4pZa.txt 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:48.208 true 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732576220 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64077 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:48.208 23:10:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:50.152 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:50.152 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.152 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:50.413 [2024-11-25 23:10:22.519765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:50.413 [2024-11-25 23:10:22.520097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:50.413 [2024-11-25 23:10:22.520134] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:50.413 [2024-11-25 23:10:22.520147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:50.413 [2024-11-25 23:10:22.522144] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64077 00:08:50.413 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64077 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64077 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_O4pZa.txt 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:50.413 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_O4pZa.txt 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64054 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64054 ']' 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64054 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64054 00:08:50.414 killing process with pid 64054 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64054' 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64054 00:08:50.414 23:10:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64054 00:08:51.797 23:10:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:51.798 23:10:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:51.798 00:08:51.798 real 0m4.876s 00:08:51.798 user 0m17.239s 00:08:51.798 sys 0m0.516s 00:08:51.798 23:10:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:51.798 ************************************ 00:08:51.798 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:51.798 ************************************ 00:08:51.798 23:10:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.058 23:10:24 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:52.058 23:10:24 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:52.059 23:10:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.059 23:10:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.059 23:10:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.059 ************************************ 00:08:52.059 START TEST nvme_fio 00:08:52.059 ************************************ 00:08:52.059 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:52.059 23:10:24 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:52.059 23:10:24 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:52.059 23:10:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:52.059 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:52.059 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:52.059 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:52.059 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:52.059 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:52.059 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:52.059 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:52.059 23:10:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:52.059 23:10:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:52.059 23:10:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:52.059 23:10:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:52.059 23:10:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:52.321 23:10:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:52.321 23:10:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:52.600 23:10:24 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:52.600 23:10:24 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:52.600 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:52.600 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:52.600 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:52.600 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:52.600 23:10:24 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:52.600 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:52.600 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:52.600 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:52.600 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:52.600 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:52.600 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:52.600 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:52.600 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:52.600 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:52.600 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:52.600 23:10:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:52.600 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:52.600 fio-3.35 00:08:52.600 Starting 1 thread 00:08:57.882 00:08:57.882 test: (groupid=0, jobs=1): err= 0: pid=64211: Mon Nov 25 23:10:29 2024 00:08:57.882 read: IOPS=19.1k, BW=74.6MiB/s (78.2MB/s)(152MiB/2034msec) 00:08:57.882 slat (nsec): min=3418, max=82857, avg=5477.31, stdev=2724.22 00:08:57.882 clat (usec): min=1203, max=35388, avg=3259.53, stdev=1457.66 00:08:57.882 lat (usec): min=1207, max=35393, avg=3265.01, stdev=1458.59 00:08:57.882 clat percentiles (usec): 00:08:57.882 | 1.00th=[ 1958], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2474], 00:08:57.882 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2835], 60.00th=[ 3032], 00:08:57.882 | 70.00th=[ 3294], 80.00th=[ 3851], 90.00th=[ 4883], 95.00th=[ 5735], 00:08:57.882 | 99.00th=[ 7373], 99.50th=[ 8455], 99.90th=[11338], 99.95th=[33817], 00:08:57.882 | 99.99th=[35390] 00:08:57.882 bw ( KiB/s): min=69968, max=84416, per=100.00%, avg=77586.00, stdev=7230.73, samples=4 00:08:57.882 iops : min=17492, max=21104, avg=19396.50, stdev=1807.68, samples=4 00:08:57.882 write: IOPS=19.0k, BW=74.4MiB/s (78.0MB/s)(151MiB/2034msec); 0 zone resets 00:08:57.882 slat (nsec): min=3547, max=75014, avg=5628.46, stdev=2663.69 00:08:57.882 clat (usec): min=1234, max=50928, avg=3429.25, stdev=2727.54 00:08:57.882 lat (usec): min=1239, max=50933, avg=3434.88, stdev=2728.01 00:08:57.882 clat percentiles (usec): 00:08:57.882 | 1.00th=[ 2040], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2474], 00:08:57.882 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2868], 60.00th=[ 3064], 00:08:57.882 | 70.00th=[ 3326], 80.00th=[ 3884], 90.00th=[ 4883], 95.00th=[ 5735], 00:08:57.882 | 99.00th=[ 8291], 99.50th=[27132], 99.90th=[39584], 99.95th=[43254], 00:08:57.882 | 99.99th=[50594] 00:08:57.882 bw ( KiB/s): min=68808, max=84304, per=100.00%, avg=77306.00, stdev=7610.59, samples=4 00:08:57.882 iops : min=17202, max=21076, avg=19326.50, stdev=1902.65, samples=4 00:08:57.882 lat (msec) : 2=1.01%, 4=80.59%, 10=17.99%, 20=0.07%, 50=0.32% 00:08:57.882 lat (msec) : 100=0.01% 00:08:57.882 cpu : usr=99.07%, sys=0.00%, ctx=5, majf=0, 
minf=606 00:08:57.882 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:57.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:57.882 issued rwts: total=38826,38747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.882 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:57.882 00:08:57.882 Run status group 0 (all jobs): 00:08:57.882 READ: bw=74.6MiB/s (78.2MB/s), 74.6MiB/s-74.6MiB/s (78.2MB/s-78.2MB/s), io=152MiB (159MB), run=2034-2034msec 00:08:57.882 WRITE: bw=74.4MiB/s (78.0MB/s), 74.4MiB/s-74.4MiB/s (78.0MB/s-78.0MB/s), io=151MiB (159MB), run=2034-2034msec 00:08:57.882 ----------------------------------------------------- 00:08:57.882 Suppressions used: 00:08:57.882 count bytes template 00:08:57.882 1 32 /usr/src/fio/parse.c 00:08:57.882 1 8 libtcmalloc_minimal.so 00:08:57.882 ----------------------------------------------------- 00:08:57.882 00:08:57.882 23:10:30 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:57.882 23:10:30 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:57.882 23:10:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:57.882 23:10:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:58.143 23:10:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:58.143 23:10:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:58.405 23:10:30 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:58.405 23:10:30 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:58.405 23:10:30 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:58.405 23:10:30 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:58.405 23:10:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:58.405 23:10:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:58.405 23:10:30 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:58.405 23:10:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:58.405 23:10:30 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:58.405 23:10:30 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:58.405 23:10:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:58.405 23:10:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:58.405 23:10:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:58.405 23:10:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:58.405 23:10:30 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:58.405 23:10:30 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:58.406 23:10:30 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:58.406 23:10:30 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:58.406 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:58.406 fio-3.35 00:08:58.406 Starting 1 thread 00:09:03.740 00:09:03.740 test: (groupid=0, jobs=1): err= 0: pid=64277: Mon Nov 25 23:10:35 2024 00:09:03.740 read: IOPS=16.6k, BW=65.0MiB/s (68.2MB/s)(130MiB/2001msec) 00:09:03.740 slat (usec): min=4, max=520, avg= 6.51, stdev= 4.41 00:09:03.740 clat (usec): min=331, max=11325, avg=3803.41, stdev=1171.99 00:09:03.740 lat (usec): min=337, max=11345, avg=3809.92, stdev=1173.37 00:09:03.740 clat percentiles (usec): 00:09:03.740 | 1.00th=[ 2114], 5.00th=[ 2769], 10.00th=[ 2900], 20.00th=[ 3032], 00:09:03.740 | 30.00th=[ 3130], 40.00th=[ 3228], 50.00th=[ 3359], 60.00th=[ 3556], 00:09:03.740 | 70.00th=[ 3884], 80.00th=[ 4555], 90.00th=[ 5538], 95.00th=[ 6325], 00:09:03.740 | 99.00th=[ 7832], 99.50th=[ 8356], 99.90th=[10028], 99.95th=[10683], 00:09:03.740 | 99.99th=[11207] 00:09:03.741 bw ( KiB/s): min=64904, max=69824, per=100.00%, avg=67037.33, stdev=2524.23, samples=3 00:09:03.741 iops : min=16226, max=17456, avg=16759.33, stdev=631.06, samples=3 00:09:03.741 write: IOPS=16.7k, BW=65.1MiB/s (68.3MB/s)(130MiB/2001msec); 0 zone resets 00:09:03.741 slat (nsec): min=5019, max=82875, avg=6656.50, stdev=3359.23 00:09:03.741 clat (usec): min=289, max=11892, avg=3844.59, stdev=1164.76 00:09:03.741 lat (usec): min=295, max=11897, avg=3851.25, stdev=1166.07 00:09:03.741 clat percentiles (usec): 00:09:03.741 | 1.00th=[ 2147], 5.00th=[ 2802], 10.00th=[ 2933], 20.00th=[ 3064], 00:09:03.741 | 30.00th=[ 3195], 40.00th=[ 3294], 50.00th=[ 3392], 60.00th=[ 3589], 00:09:03.741 | 70.00th=[ 3916], 80.00th=[ 4621], 90.00th=[ 5538], 95.00th=[ 6390], 00:09:03.741 | 99.00th=[ 7832], 99.50th=[ 8225], 99.90th=[ 9634], 99.95th=[10552], 00:09:03.741 | 99.99th=[11207] 00:09:03.741 bw ( KiB/s): min=65296, max=69688, per=100.00%, avg=67000.00, stdev=2355.55, samples=3 00:09:03.741 iops : min=16324, max=17422, avg=16750.00, stdev=588.89, samples=3 00:09:03.741 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:09:03.741 lat (msec) : 2=0.64%, 4=71.28%, 10=27.95%, 20=0.09% 00:09:03.741 cpu : usr=98.75%, sys=0.00%, ctx=3, majf=0, minf=607 00:09:03.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:03.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.741 issued rwts: total=33308,33373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.741 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.741 00:09:03.741 Run status group 0 (all jobs): 00:09:03.741 READ: bw=65.0MiB/s (68.2MB/s), 65.0MiB/s-65.0MiB/s (68.2MB/s-68.2MB/s), io=130MiB (136MB), run=2001-2001msec 00:09:03.741 WRITE: bw=65.1MiB/s (68.3MB/s), 65.1MiB/s-65.1MiB/s (68.3MB/s-68.3MB/s), io=130MiB (137MB), run=2001-2001msec 00:09:03.741 ----------------------------------------------------- 00:09:03.741 Suppressions used: 00:09:03.741 count bytes template 00:09:03.741 1 32 /usr/src/fio/parse.c 00:09:03.741 1 8 libtcmalloc_minimal.so 00:09:03.741 ----------------------------------------------------- 00:09:03.741 00:09:03.741 
23:10:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:03.741 23:10:35 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:03.741 23:10:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:03.741 23:10:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:04.001 23:10:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:04.001 23:10:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:04.261 23:10:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:04.261 23:10:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:04.261 23:10:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:04.261 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:04.261 fio-3.35 00:09:04.261 Starting 1 thread 00:09:07.569 00:09:07.569 test: (groupid=0, jobs=1): err= 0: pid=64338: Mon Nov 25 23:10:39 2024 00:09:07.569 read: IOPS=9394, BW=36.7MiB/s (38.5MB/s)(74.7MiB/2036msec) 00:09:07.569 slat (nsec): min=4888, max=92297, avg=7027.91, stdev=3991.29 00:09:07.569 clat (usec): min=1019, max=41852, avg=4681.55, stdev=2306.08 00:09:07.569 lat (usec): min=1025, max=41857, avg=4688.58, stdev=2306.56 00:09:07.569 clat percentiles (usec): 00:09:07.569 | 1.00th=[ 1696], 5.00th=[ 2606], 10.00th=[ 2933], 20.00th=[ 3195], 00:09:07.569 | 30.00th=[ 3392], 
40.00th=[ 3752], 50.00th=[ 3982], 60.00th=[ 4490], 00:09:07.569 | 70.00th=[ 5276], 80.00th=[ 6128], 90.00th=[ 7308], 95.00th=[ 8586], 00:09:07.569 | 99.00th=[10421], 99.50th=[11600], 99.90th=[36963], 99.95th=[40633], 00:09:07.569 | 99.99th=[41681] 00:09:07.569 bw ( KiB/s): min=22120, max=60144, per=100.00%, avg=38194.00, stdev=18100.08, samples=4 00:09:07.569 iops : min= 5530, max=15036, avg=9548.50, stdev=4525.02, samples=4 00:09:07.569 write: IOPS=9403, BW=36.7MiB/s (38.5MB/s)(74.8MiB/2036msec); 0 zone resets 00:09:07.569 slat (usec): min=5, max=163, avg= 7.29, stdev= 4.19 00:09:07.569 clat (usec): min=1119, max=76581, avg=8886.83, stdev=11748.98 00:09:07.569 lat (usec): min=1125, max=76587, avg=8894.12, stdev=11749.02 00:09:07.569 clat percentiles (usec): 00:09:07.569 | 1.00th=[ 1909], 5.00th=[ 2835], 10.00th=[ 3032], 20.00th=[ 3261], 00:09:07.569 | 30.00th=[ 3523], 40.00th=[ 3884], 50.00th=[ 4178], 60.00th=[ 4948], 00:09:07.569 | 70.00th=[ 5932], 80.00th=[ 7308], 90.00th=[35390], 95.00th=[40633], 00:09:07.569 | 99.00th=[45351], 99.50th=[46924], 99.90th=[63177], 99.95th=[70779], 00:09:07.569 | 99.99th=[76022] 00:09:07.569 bw ( KiB/s): min=22248, max=60600, per=100.00%, avg=38098.00, stdev=18170.95, samples=4 00:09:07.570 iops : min= 5562, max=15150, avg=9524.50, stdev=4542.74, samples=4 00:09:07.570 lat (msec) : 2=1.69%, 4=45.87%, 10=45.02%, 20=1.13%, 50=6.16% 00:09:07.570 lat (msec) : 100=0.13% 00:09:07.570 cpu : usr=98.92%, sys=0.00%, ctx=5, majf=0, minf=607 00:09:07.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:09:07.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.570 issued rwts: total=19127,19146,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.570 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:07.570 00:09:07.570 Run status group 0 (all jobs): 00:09:07.570 READ: bw=36.7MiB/s (38.5MB/s), 36.7MiB/s-36.7MiB/s (38.5MB/s-38.5MB/s), io=74.7MiB (78.3MB), run=2036-2036msec 00:09:07.570 WRITE: bw=36.7MiB/s (38.5MB/s), 36.7MiB/s-36.7MiB/s (38.5MB/s-38.5MB/s), io=74.8MiB (78.4MB), run=2036-2036msec 00:09:07.570 ----------------------------------------------------- 00:09:07.570 Suppressions used: 00:09:07.570 count bytes template 00:09:07.570 1 32 /usr/src/fio/parse.c 00:09:07.570 1 8 libtcmalloc_minimal.so 00:09:07.570 ----------------------------------------------------- 00:09:07.570 00:09:07.570 23:10:39 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:07.570 23:10:39 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:07.570 23:10:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:07.570 23:10:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:07.570 23:10:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:07.570 23:10:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:07.831 23:10:40 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:07.831 23:10:40 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:07.831 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:07.831 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:07.831 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:07.831 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:07.831 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.831 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:07.831 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:07.831 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:07.831 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.831 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:07.831 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:07.831 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:07.831 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:07.831 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:07.832 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:07.832 23:10:40 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:08.094 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:08.094 fio-3.35 00:09:08.094 Starting 1 thread 00:09:14.672 00:09:14.672 test: (groupid=0, jobs=1): err= 0: pid=64393: Mon Nov 25 23:10:46 2024 00:09:14.672 read: IOPS=15.9k, BW=62.3MiB/s (65.3MB/s)(125MiB/2001msec) 00:09:14.672 slat (nsec): min=4868, max=96217, avg=7055.40, stdev=3937.88 00:09:14.672 clat (usec): min=223, max=11571, avg=3993.09, stdev=1327.76 00:09:14.672 lat (usec): min=229, max=11581, avg=4000.15, stdev=1329.35 00:09:14.672 clat percentiles (usec): 00:09:14.672 | 1.00th=[ 2409], 5.00th=[ 2573], 10.00th=[ 2671], 20.00th=[ 2933], 00:09:14.672 | 30.00th=[ 3163], 40.00th=[ 3359], 50.00th=[ 3589], 60.00th=[ 3851], 00:09:14.672 | 70.00th=[ 4293], 80.00th=[ 5080], 90.00th=[ 5997], 95.00th=[ 6652], 00:09:14.672 | 99.00th=[ 7963], 99.50th=[ 8586], 99.90th=[ 9634], 99.95th=[11076], 00:09:14.672 | 99.99th=[11469] 00:09:14.672 bw ( KiB/s): min=58576, max=62952, per=94.23%, avg=60088.00, stdev=2481.59, samples=3 00:09:14.672 iops : min=14644, max=15738, avg=15022.00, stdev=620.40, samples=3 00:09:14.672 write: IOPS=16.0k, BW=62.4MiB/s (65.4MB/s)(125MiB/2001msec); 0 zone resets 00:09:14.672 slat (nsec): min=5020, max=89793, avg=7355.48, stdev=3753.12 00:09:14.672 clat (usec): min=232, max=11511, avg=3996.69, stdev=1318.04 00:09:14.672 lat (usec): min=238, max=11517, avg=4004.04, stdev=1319.56 00:09:14.672 clat percentiles (usec): 00:09:14.672 | 1.00th=[ 2442], 5.00th=[ 2573], 10.00th=[ 2671], 20.00th=[ 2933], 00:09:14.672 | 30.00th=[ 3195], 40.00th=[ 3359], 50.00th=[ 3589], 60.00th=[ 3851], 00:09:14.672 | 70.00th=[ 4293], 80.00th=[ 5080], 90.00th=[ 5997], 95.00th=[ 6652], 
00:09:14.672 | 99.00th=[ 7963], 99.50th=[ 8455], 99.90th=[ 9503], 99.95th=[10552], 00:09:14.672 | 99.99th=[11338] 00:09:14.672 bw ( KiB/s): min=57984, max=63264, per=93.58%, avg=59781.33, stdev=3016.60, samples=3 00:09:14.672 iops : min=14496, max=15816, avg=14945.33, stdev=754.15, samples=3 00:09:14.672 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:09:14.672 lat (msec) : 2=0.29%, 4=63.63%, 10=35.97%, 20=0.07% 00:09:14.672 cpu : usr=98.55%, sys=0.15%, ctx=7, majf=0, minf=605 00:09:14.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:14.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.672 issued rwts: total=31900,31958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.672 00:09:14.672 Run status group 0 (all jobs): 00:09:14.672 READ: bw=62.3MiB/s (65.3MB/s), 62.3MiB/s-62.3MiB/s (65.3MB/s-65.3MB/s), io=125MiB (131MB), run=2001-2001msec 00:09:14.672 WRITE: bw=62.4MiB/s (65.4MB/s), 62.4MiB/s-62.4MiB/s (65.4MB/s-65.4MB/s), io=125MiB (131MB), run=2001-2001msec 00:09:14.672 ----------------------------------------------------- 00:09:14.672 Suppressions used: 00:09:14.672 count bytes template 00:09:14.672 1 32 /usr/src/fio/parse.c 00:09:14.672 1 8 libtcmalloc_minimal.so 00:09:14.672 ----------------------------------------------------- 00:09:14.672 00:09:14.672 23:10:46 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:14.672 23:10:46 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:14.672 00:09:14.672 real 0m22.776s 00:09:14.672 user 0m15.175s 00:09:14.672 sys 0m12.432s 00:09:14.672 23:10:46 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.672 23:10:46 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:14.672 ************************************ 00:09:14.672 END TEST nvme_fio 00:09:14.672 ************************************ 00:09:14.672 00:09:14.672 real 1m33.495s 00:09:14.672 user 3m37.833s 00:09:14.672 sys 0m23.292s 00:09:14.672 23:10:47 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.672 23:10:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.672 ************************************ 00:09:14.672 END TEST nvme 00:09:14.672 ************************************ 00:09:14.932 23:10:47 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:14.932 23:10:47 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:14.932 23:10:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.932 23:10:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.932 23:10:47 -- common/autotest_common.sh@10 -- # set +x 00:09:14.932 ************************************ 00:09:14.932 START TEST nvme_scc 00:09:14.932 ************************************ 00:09:14.932 23:10:47 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:14.932 * Looking for test storage... 
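All four nvme_fio passes above (traddr 0000:00:10.0 through 0000:00:13.0) go through the same launch sequence from nvme/nvme.sh and common/autotest_common.sh: identify the controller, check its namespaces (an 'Extended Data LBA' format would call for a different block size than the plain 4096 used here), locate the ASAN runtime among the plugin's dynamic dependencies, and preload it ahead of the SPDK ioengine so fio can dlopen the plugin. A minimal sketch of that sequence, reconstructed from the xtrace above — paths, the grep patterns, and the 4096-byte block size mirror the trace; error handling and the extended-LBA branch are simplified:

    #!/usr/bin/env bash
    # Sketch of the per-device fio launch traced above; paths are taken
    # from the trace, the rest is trimmed down for illustration.
    IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    CONFIG=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

    fio_nvme_bdf() {
        local bdf=$1 bs=4096 asan_lib= sanitizer

        # Skip controllers that expose no namespace.
        "$IDENTIFY" -r "trtype:PCIe traddr:$bdf" |
            grep -qE '^Namespace ID:[0-9]+' || return 0

        # fio has to load the ASAN runtime before the SPDK plugin, so
        # look it up among the plugin's dynamic dependencies.
        for sanitizer in libasan libclang_rt.asan; do
            asan_lib=$(ldd "$PLUGIN" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && break
        done

        # fio splits --filename on ':', hence the dotted PCI address.
        LD_PRELOAD="$asan_lib $PLUGIN" /usr/src/fio/fio "$CONFIG" \
            "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
    }

    fio_nvme_bdf 0000:00:10.0

The dotted traddr is the detail that is easy to miss: '--filename=trtype=PCIe traddr=0000.00.10.0' in the trace is the colon-free spelling of PCI address 0000:00:10.0, since fio reserves ':' as a filename separator.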
00:09:14.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:14.932 23:10:47 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:14.932 23:10:47 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:14.932 23:10:47 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:14.932 23:10:47 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:14.932 23:10:47 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.932 23:10:47 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:14.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.932 --rc genhtml_branch_coverage=1 00:09:14.932 --rc genhtml_function_coverage=1 00:09:14.932 --rc genhtml_legend=1 00:09:14.932 --rc geninfo_all_blocks=1 00:09:14.932 --rc geninfo_unexecuted_blocks=1 00:09:14.932 00:09:14.932 ' 00:09:14.932 23:10:47 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:14.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.932 --rc genhtml_branch_coverage=1 00:09:14.932 --rc genhtml_function_coverage=1 00:09:14.932 --rc genhtml_legend=1 00:09:14.932 --rc geninfo_all_blocks=1 00:09:14.932 --rc geninfo_unexecuted_blocks=1 00:09:14.932 00:09:14.932 ' 00:09:14.932 23:10:47 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:14.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.932 --rc genhtml_branch_coverage=1 00:09:14.932 --rc genhtml_function_coverage=1 00:09:14.932 --rc genhtml_legend=1 00:09:14.932 --rc geninfo_all_blocks=1 00:09:14.932 --rc geninfo_unexecuted_blocks=1 00:09:14.932 00:09:14.932 ' 00:09:14.932 23:10:47 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:14.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.932 --rc genhtml_branch_coverage=1 00:09:14.932 --rc genhtml_function_coverage=1 00:09:14.932 --rc genhtml_legend=1 00:09:14.932 --rc geninfo_all_blocks=1 00:09:14.932 --rc geninfo_unexecuted_blocks=1 00:09:14.932 00:09:14.932 ' 00:09:14.932 23:10:47 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:14.932 23:10:47 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:14.932 23:10:47 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:14.932 23:10:47 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:14.932 23:10:47 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.932 23:10:47 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.932 23:10:47 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.932 23:10:47 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.932 23:10:47 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.932 23:10:47 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:14.933 23:10:47 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
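The cmp_versions walk traced above is scripts/common.sh picking coverage flags: the last awk field of `lcov --version` is compared against 2, `lt 1.15 2` succeeds, and the pre-2.0 `--rc lcov_branch_coverage=1` spelling gets exported. A condensed sketch of the comparison as traced — split on '.', '-' or ':' and compare numerically component by component; the upstream helper supports more operators than the three shown here, and its non-numeric handling differs slightly:

    # Condensed sketch of decimal/cmp_versions/lt from scripts/common.sh.
    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # non-numeric part counts as 0
    }

    cmp_versions() {
        local IFS=.-:               # split versions on '.', '-' or ':'
        local -a ver1 ver2
        local op=$2 v a b
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=$(decimal "${ver1[v]:-0}")
            b=$(decimal "${ver2[v]:-0}")
            ((a > b)) && { [[ $op == '>' ]]; return; }
            ((a < b)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]           # every component matched
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo 'lcov is pre-2.0: use --rc lcov_branch_coverage=1'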
00:09:14.933 23:10:47 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:14.933 23:10:47 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:14.933 23:10:47 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:14.933 23:10:47 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:14.933 23:10:47 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:14.933 23:10:47 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:14.933 23:10:47 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:14.933 23:10:47 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:14.933 23:10:47 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:14.933 23:10:47 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.933 23:10:47 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:14.933 23:10:47 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:14.933 23:10:47 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:14.933 23:10:47 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:15.191 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:15.449 Waiting for block devices as requested 00:09:15.449 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:15.449 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:15.706 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:15.706 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.029 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:21.029 23:10:53 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:21.029 23:10:53 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:21.029 23:10:53 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:21.029 23:10:53 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:21.030 23:10:53 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:21.030 23:10:53 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:21.030 23:10:53 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.030 23:10:53 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
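From here to the point where the section cuts off, the trace is scan_nvme_ctrls populating the nvme0 associative array: nvme_get runs the bundled nvme-cli id-ctrl, splits each "register : value" output line on ':', and evals the pair into the array, as just happened for vid=0x1b36 (QEMU's PCI vendor ID). Every field below repeats the same IFS=: / read / eval cycle. A stripped-down sketch of the helper; whitespace trimming is simplified relative to the traced nvme/functions.sh:

    # Stripped-down sketch of nvme_get from test/common/nvme/functions.sh.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"             # e.g. declares the global array nvme0

        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}    # 'vid       ' -> 'vid', 'ps    0' -> 'ps0'
            val=${val# }                # drop the space after ':', keep trailing padding
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[$reg]=\"\$val\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    # Usage mirroring the trace:
    nvme_get nvme0 id-ctrl /dev/nvme0
    echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"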
00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.030 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
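Two of the raw values already dumped are more readable decoded: ver=0x10400 packs the implemented spec version as major/minor/tertiary bytes, and mdts=7 expresses the maximum data transfer size as a power-of-two multiple of the controller's minimum page size. A quick decode — the 4 KiB page size is the typical CAP.MPSMIN and is an assumption here, since that register is not part of this trace:

    # Decoding two id-ctrl fields from the dump above. The 4 KiB minimum
    # page size is assumed, not read from the trace.
    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    # -> NVMe 1.4.0

    mdts=7
    echo "max transfer: $(((1 << mdts) * 4096 / 1024)) KiB"   # -> 512 KiB

So the QEMU controller reports NVMe 1.4.0 and, with a 4 KiB page, caps any single transfer at 512 KiB.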
00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:21.031 23:10:53 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.031 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:21.032 23:10:53 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:21.032 23:10:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:21.033 
23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
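
The triplets traced above from nvme/functions.sh@21-23 are one generic scraper at work: nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl or id-ns against a device, splits every output line on the first ':' with IFS=: read -r reg val, and evals each pair into a global associative array named after the device node (nvme0, ng0n1, ...). A minimal sketch of that pattern, reconstructed from the trace rather than copied from SPDK's functions.sh; the exact whitespace trimming is an assumption:

    # Sketch of the identify-page scraper seen at functions.sh@16-23.
    # Usage: nvme_get_sketch nvme0 id-ctrl /dev/nvme0
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                      # e.g. declares global nvme0=(), as at @20
        while IFS=: read -r reg val; do          # split "vwc : 0x7" at the colon
            reg=${reg//[[:space:]]/}             # key, e.g. vwc (assumed trim)
            val=${val#"${val%%[![:space:]]*}"}   # drop leading spaces, keep trailing ones
            [[ -n $val ]] || continue            # header/blank lines carry no value (@22)
            eval "${ref}[\$reg]=\"\$val\""       # e.g. nvme0[vwc]="0x7" (@23)
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

After the call the fields read back directly; e.g. ${ng0n1[nsze]} holds 0x140000 and ${nvme0[subnqn]} holds nqn.2019-08.org.qemu:12341 per the dump above.
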
00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:21.033 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:21.034 23:10:53 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:21.034 23:10:53 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.034 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:21.035 23:10:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.035 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:21.036 23:10:53 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:21.036 23:10:53 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:21.036 23:10:53 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.036 23:10:53 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:21.036 23:10:53 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.036 
23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:21.036 
23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:21.036 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.037 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.038 23:10:53 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.038 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:21.039 23:10:53 
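
The repeating IFS=:/read/eval records above all come from one small loop in functions.sh: nvme_get splits each "reg : val" line that nvme-cli prints and stores the pair in a global associative array named after the device. A simplified re-creation of that pattern (the real implementation lives in SPDK's test/nvme/functions.sh; the argument handling here is condensed and the nvme binary is passed explicitly):

#!/usr/bin/env bash
# Sketch of the nvme_get pattern traced in this log.
nvme_get() {
  local ref=$1 reg val
  shift
  local -gA "$ref=()"                # e.g. declare -gA nvme1=()
  while IFS=: read -r reg val; do
    [[ -n $val ]] || continue        # skip lines with no "reg : val" pair
    reg=${reg%% *} val=${val# }      # trim the key and the leading space
    eval "${ref}[${reg}]=\"\$val\""  # nvme1[sqes]=0x66, ng1n1[nsze]=..., etc.
  done < <("$@")
}

# Usage sketch (assumes nvme-cli is installed and /dev/nvme1 exists):
#   nvme_get nvme1 nvme id-ctrl /dev/nvme1
#   echo "${nvme1[sn]} ${nvme1[mn]}"
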
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:09:21.039 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
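
The @54 loop above picks up both namespace nodes for a controller with one bash extglob pattern: the generic character device (ng1n1) and the block device (nvme1n1). An illustrative expansion, assuming the same sysfs layout as this test VM:

#!/usr/bin/env bash
# How the @54 pattern matches both node types for /sys/class/nvme/nvme1.
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme1
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  # ${ctrl##*nvme} -> "1"     => "ng1"*    matches ng1n1 (char device)
  # ${ctrl##*/}    -> "nvme1" => "nvme1n"* matches nvme1n1 (block device)
  echo "namespace node: ${ns##*/}"
done

The @58 assignment then keys _ctrl_ns by the trailing namespace index (${ns##*n} strips everything up to the last "n", leaving "1"), so both node names land in the same per-namespace slot.
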
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:09:21.040 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:09:21.041 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
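
nvme1n1 reports the identical id-ns data as ng1n1, as expected: both nodes expose the same namespace. With flbas=0x7 the active LBA format is lbaf7 (the entry flagged "(in use)"), i.e. 4 KiB data blocks with 64 bytes of metadata. A worked example from the captured values:

#!/usr/bin/env bash
# Derive the logical block size and raw capacity from this run's id-ns data.
flbas=0x7
nsze=0x17a17a
lbads=12                  # from "lbaf7 : ms:64 lbads:12 rp:0 (in use)"
fmt=$(( flbas & 0xf ))    # bits 3:0 select the active format -> 7
bs=$(( 1 << lbads ))      # 2^12 = 4096-byte logical blocks
echo "LBA format $fmt: ${bs}B blocks, $(( nsze * bs )) bytes total"
# -> LBA format 7: 4096B blocks, 6343335936 bytes total (~5.9 GiB)
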
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:21.042 23:10:53 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:21.042 23:10:53 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:09:21.042 23:10:53 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:21.042 23:10:53 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:09:21.042 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.043 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:21.044 23:10:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:21.044 
23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.044 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.045 
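The repeated eval/IFS=:/read trace above is nvme/functions.sh's nvme_get helper walking `nvme id-ctrl` output one "field : value" line at a time and storing each pair in a bash associative array named after the device. A minimal sketch of that pattern, assuming bash 4.3+ and an nvme-cli binary on PATH (nvme_get_sketch is an illustrative name, not the real helper, which evals assignments into a caller-named array as seen in the trace):

    #!/usr/bin/env bash
    # Sketch: parse "field : value" lines from nvme-cli into an associative array.
    nvme_get_sketch() {
      local dev=$1 name=$2 reg val
      declare -gA "$name=()"         # e.g. nvme2=() or ng2n1=()
      local -n _out=$name            # nameref stands in for the eval seen above
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # field names arrive padded, e.g. "mdts      "
        val=${val# }                 # values arrive with one leading space
        [[ -n $reg && -n $val ]] && _out[$reg]=$val   # same guard as functions.sh@22
      done < <(nvme id-ctrl "$dev")  # namespaces use `nvme id-ns` the same way
    }
    # usage: nvme_get_sketch /dev/nvme2 nvme2; echo "${nvme2[mdts]}"   # -> 7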
23:10:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.045 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:21.046 23:10:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:21.047 23:10:53 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 
23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:21.047 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # 
00:09:21.047 ng2n2: npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:21.048 ng2n2: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:21.048 ng2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:21.048 23:10:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:09:21.048 23:10:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:21.048 23:10:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:21.048 23:10:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:09:21.048 23:10:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:09:21.048 23:10:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:09:21.048 23:10:53 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:21.048 23:10:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:09:21.048 23:10:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:09:21.048 ng2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14
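The trace above is the body of the nvme_get helper (nvme/functions.sh@16-23): the output of nvme id-ns is split on ':' into register/value pairs, and each pair is eval'd into a global associative array named after the device node. The following is a minimal sketch reconstructed from the trace, not the verbatim SPDK helper; NVME_BIN is an assumption standing in for the /usr/local/src/nvme-cli/nvme binary the CI run uses.

    #!/usr/bin/env bash
    # Sketch of the nvme_get loop traced at nvme/functions.sh@16-23.
    # NVME_BIN is hypothetical; the log shows /usr/local/src/nvme-cli/nvme.
    NVME_BIN=${NVME_BIN:-nvme}

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # global array named after the node, as traced

        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # header lines produce an empty val
            reg=${reg//[[:space:]]/}        # "lbaf  4 " -> "lbaf4", matching the trace
            val=${val# }                    # drop the space that follows the colon
            eval "${ref}[$reg]=\"$val\""    # e.g. ng2n3[flbas]="0x4"
        done < <("$NVME_BIN" "$@")
    }

    # As invoked in the trace:
    # nvme_get ng2n3 id-ns /dev/ng2n3
    # echo "${ng2n3[nsze]}"    # 0x100000

Because the array name is dynamic, the assignment has to go through eval (or an equivalent); the quoting hazard that implies is illustrated further down.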
00:09:21.048 ng2n3: nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:21.049 ng2n3: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:09:21.049 ng2n3: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:21.049 ng2n3: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:21.050 23:10:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:09:21.050 23:10:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:21.050 23:10:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:21.050 23:10:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:21.050 23:10:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:21.050 23:10:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:21.050 nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000
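The for loop at nvme/functions.sh@54, visible in each block above, enumerates both the generic char-device nodes (ng2n1, ng2n2, ...) and the block-device nodes (nvme2n1, nvme2n2, ...) under the controller's sysfs directory with one extglob pattern. A sketch of how that glob expands, using a scratch directory in place of /sys/class/nvme so it runs anywhere; the demo layout is an assumption:

    #!/usr/bin/env bash
    # Sketch of the enumeration at nvme/functions.sh@54-56, assuming extglob.
    shopt -s extglob nullglob

    ctrl=$(mktemp -d)/nvme2 && mkdir "$ctrl"          # stand-in for /sys/class/nvme/nvme2
    touch "$ctrl"/{ng2n1,ng2n2,ng2n3,nvme2n1,nvme2n2,nvme2n3}

    # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2",
    # so the pattern below is @("ng2"|"nvme2n")* -- both node flavors.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                              # e.g. ng2n3 or nvme2n3
        echo "found namespace node: $ns_dev"
    done

Glob expansion is sorted, which is why the log shows all ng2nX nodes parsed before any nvme2nX node.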
00:09:21.050 nvme2n1: nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:21.050 nvme2n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:09:21.051 nvme2n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:21.051 nvme2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:21.051 23:10:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:21.051 23:10:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:21.051 23:10:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:21.051 23:10:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:21.051 23:10:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:21.051 23:10:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:21.051 nvme2n2: nsze=0x100000
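Every namespace in this log reports flbas=0x4 and lbaf4='ms:0 lbads:12 rp:0 (in use)', i.e. format 4 is in use: 2^12 = 4096-byte data blocks with no per-block metadata. A sketch of deriving that from the parsed values; it seeds the array inline for self-containment, and the reading that FLBAS bits 3:0 index the LBA format list is my interpretation of the NVMe spec, not something the log states:

    #!/usr/bin/env bash
    # Sketch: compute the in-use LBA size from an id-ns array like the ones
    # nvme_get populated above. Values copied from this log.
    declare -A nvme2n1=(
        [flbas]=0x4
        [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    )
    fmt=$(( nvme2n1[flbas] & 0xf ))               # bits 3:0 -> format index 4
    lbaf=${nvme2n1[lbaf$fmt]}
    lbads=${lbaf##*lbads:}                        # "12 rp:0 (in use)"
    lbads=${lbads%% *}                            # "12"
    echo "LBA data size: $((1 << lbads)) bytes"   # 4096; ms:0 means no metadata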
00:09:21.051 nvme2n2: ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0
00:09:21.052 nvme2n2: fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:09:21.052 nvme2n2: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:21.052 nvme2n2: nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:21.053 nvme2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:21.053 23:10:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:09:21.053 23:10:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:21.053 23:10:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:21.053 23:10:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:21.053 23:10:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:21.053 23:10:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:09:21.053 23:10:53 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:21.053 23:10:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:09:21.053 23:10:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
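The eval at functions.sh@23, traced hundreds of times above, splices the parsed value directly into shell code. A sketch of that assignment next to an eval-free variant; the variable names are illustrative, not taken from the SPDK source:

    #!/usr/bin/env bash
    # Sketch: dynamic array assignment as traced, plus a safer alternative.
    declare -gA nvme2n3=()
    ref=nvme2n3 reg=lbaf4 val='ms:0 lbads:12 rp:0 (in use)'

    # As traced: the value becomes code, so it must stay free of double
    # quotes, backslashes, $ and backticks (id-ns output is, in practice).
    eval "${ref}[$reg]=\"$val\""

    # Same effect without eval; printf -v accepts an array subscript.
    printf -v "${ref}[$reg]" '%s' "$val"

    echo "${nvme2n3[lbaf4]}"    # ms:0 lbads:12 rp:0 (in use)

For trusted nvme-cli output the eval form is fine; the printf -v form simply removes the injection surface.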
00:09:21.053 nvme2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:09:21.053 nvme2n3: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0
00:09:21.054 nvme2n3: nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128
nvme2n3[mcl]=128 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.054 23:10:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:21.054 23:10:53 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:21.054 23:10:53 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:21.055 23:10:53 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:21.055 23:10:53 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:21.055 23:10:53 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.055 23:10:53 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:21.055 23:10:53 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:21.055 23:10:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.055 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 
23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:21.056 23:10:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 
23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:21.056 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:21.057 
23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.057 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.316 23:10:53 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.316 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:21.317 23:10:53 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:21.317 23:10:53 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
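For reference, the long id-ns/id-ctrl walks traced above all reduce to one pattern: split each nvme-cli output line on ':' into a register name and a value, stash the pair in a bash associative array, and finally gate controller selection on ONCS bit 8 (the NVMe Copy command, which backs Simple Copy — here oncs=0x15d, and 0x15d & 0x100 is non-zero, so all four controllers qualify and nvme1, the first match, is picked). A minimal self-contained sketch of that pattern follows; it is a reconstruction for illustration, not the actual nvme/functions.sh source, and /dev/nvme0 is an assumed device node (needs root and an nvme-cli binary in PATH):

#!/usr/bin/env bash
# Parse "field : value" lines from nvme-cli into an associative array,
# mirroring the IFS=: / read -r reg val loop seen in the trace.
declare -A ctrl
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}          # field names arrive space-padded
    [[ -n $reg && -n $val ]] || continue   # skip headers/blank fields, as the trace does
    ctrl[$reg]=${val# }               # drop one leading space, keep the rest verbatim
done < <(nvme id-ctrl /dev/nvme0)     # illustrative device node

# The SCC gate traced above: ONCS bit 8 advertises the Copy command.
if (( ${ctrl[oncs]:-0} & (1 << 8) )); then
    echo "nvme0 supports Simple Copy"
fi

Splitting only on ':' deliberately preserves embedded spaces, which is why values such as 'QEMU NVMe Ctrl ' and the ps0 power-state string arrive padded in the trace.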
00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:21.317 23:10:53 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:21.317 23:10:53 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:21.317 23:10:53 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:21.317 23:10:53 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:21.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.142 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.142 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.142 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.142 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.142 23:10:54 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:22.142 23:10:54 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:22.142 23:10:54 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.142 23:10:54 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:22.142 ************************************ 00:09:22.142 START TEST nvme_simple_copy 00:09:22.142 ************************************ 00:09:22.142 23:10:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:22.401 Initializing NVMe Controllers 00:09:22.401 Attaching to 0000:00:10.0 00:09:22.401 Controller supports SCC. Attached to 0000:00:10.0 00:09:22.401 Namespace ID: 1 size: 6GB 00:09:22.401 Initialization complete. 
00:09:22.401 00:09:22.401 Controller QEMU NVMe Ctrl (12340 ) 00:09:22.401 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:22.401 Namespace Block Size:4096 00:09:22.401 Writing LBAs 0 to 63 with Random Data 00:09:22.401 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:22.401 LBAs matching Written Data: 64 00:09:22.401 00:09:22.401 real 0m0.253s 00:09:22.401 user 0m0.094s 00:09:22.401 sys 0m0.057s 00:09:22.401 23:10:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.401 23:10:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:22.401 ************************************ 00:09:22.401 END TEST nvme_simple_copy 00:09:22.401 ************************************ 00:09:22.401 00:09:22.401 real 0m7.621s 00:09:22.401 user 0m1.111s 00:09:22.401 sys 0m1.415s 00:09:22.401 23:10:54 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.401 23:10:54 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:22.401 ************************************ 00:09:22.401 END TEST nvme_scc 00:09:22.401 ************************************ 00:09:22.401 23:10:54 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:22.401 23:10:54 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:22.401 23:10:54 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:22.401 23:10:54 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:22.401 23:10:54 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:22.401 23:10:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.401 23:10:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.401 23:10:54 -- common/autotest_common.sh@10 -- # set +x 00:09:22.401 ************************************ 00:09:22.401 START TEST nvme_fdp 00:09:22.401 ************************************ 00:09:22.401 23:10:54 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:22.659 * Looking for test storage... 00:09:22.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:22.659 23:10:54 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.659 23:10:54 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.659 23:10:54 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.659 23:10:54 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.659 23:10:54 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.659 23:10:54 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.659 23:10:54 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.659 23:10:54 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.659 23:10:54 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.659 23:10:54 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:22.660 23:10:54 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.660 23:10:54 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.660 --rc genhtml_branch_coverage=1 00:09:22.660 --rc genhtml_function_coverage=1 00:09:22.660 --rc genhtml_legend=1 00:09:22.660 --rc geninfo_all_blocks=1 00:09:22.660 --rc geninfo_unexecuted_blocks=1 00:09:22.660 00:09:22.660 ' 00:09:22.660 23:10:54 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.660 --rc genhtml_branch_coverage=1 00:09:22.660 --rc genhtml_function_coverage=1 00:09:22.660 --rc genhtml_legend=1 00:09:22.660 --rc geninfo_all_blocks=1 00:09:22.660 --rc geninfo_unexecuted_blocks=1 00:09:22.660 00:09:22.660 ' 00:09:22.660 23:10:54 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:22.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.660 --rc genhtml_branch_coverage=1 00:09:22.660 --rc genhtml_function_coverage=1 00:09:22.660 --rc genhtml_legend=1 00:09:22.660 --rc geninfo_all_blocks=1 00:09:22.660 --rc geninfo_unexecuted_blocks=1 00:09:22.660 00:09:22.660 ' 00:09:22.660 23:10:54 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.660 --rc genhtml_branch_coverage=1 00:09:22.660 --rc genhtml_function_coverage=1 00:09:22.660 --rc genhtml_legend=1 00:09:22.660 --rc geninfo_all_blocks=1 00:09:22.660 --rc geninfo_unexecuted_blocks=1 00:09:22.660 00:09:22.660 ' 00:09:22.660 23:10:54 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:22.660 23:10:54 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:22.660 23:10:54 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:22.660 23:10:54 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:22.660 23:10:54 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.660 23:10:54 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.660 23:10:54 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.660 23:10:54 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.660 23:10:54 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.660 23:10:54 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:22.660 23:10:54 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.660 23:10:54 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:22.660 23:10:54 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:22.660 23:10:54 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:22.660 23:10:54 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:22.660 23:10:54 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:22.660 23:10:54 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:22.660 23:10:54 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:22.660 23:10:54 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:22.660 23:10:54 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:22.660 23:10:54 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:22.660 23:10:54 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:22.918 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:23.175 Waiting for block devices as requested 00:09:23.175 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.175 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.175 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.433 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.724 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:28.724 23:11:00 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:28.724 23:11:00 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:28.724 23:11:00 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:28.724 23:11:00 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:28.724 23:11:00 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.724 23:11:00 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.724 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:28.725 23:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.725 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.726 23:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:28.726 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.727 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.728 23:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:28.728 23:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.728 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.729 
23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:28.729 23:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:28.729 23:11:00 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:28.729 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:28.730 23:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
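The mssrl value just captured, together with the mcl and msrc fields read next, bounds what a Copy command may request on this namespace. A hedged sketch of how those limits would gate a request shaped like the 64-LBA copy exercised by nvme_simple_copy earlier, assuming the usual NVMe convention that msrc is 0-based while mssrl and mcl count logical blocks:

    # Illustrative only; variable names are ours, values are this QEMU namespace's.
    mssrl=128 mcl=128 msrc=127
    nr_ranges=1 range_len=64 total_len=64            # the simple_copy test's shape
    (( nr_ranges <= msrc + 1 )) || echo "too many source ranges"
    (( range_len  <= mssrl ))   || echo "a source range exceeds MSSRL"
    (( total_len  <= mcl ))     || echo "total copy length exceeds MCL"

All three bounds hold, which is consistent with the simple-copy test passing earlier in the run.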
00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:28.730 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:28.731 23:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
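With the lbaf0 through lbaf7 rows read, the per-namespace dump for ng0n1 is complete. The loop being traced is nvme_get's generic "reg : val" parser; a condensed standalone sketch of the same pattern, assuming nvme-cli's id-ns output format and reusing /dev/ng0n1 from above:

    # Sketch of the nvme_get parsing idiom traced here: split each nvme-cli
    # line on the first ':' and store it in a bash associative array.
    declare -A ns
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}       # "lbaf  4" collapses to "lbaf4", as above
        [[ -n $reg && -n $val ]] || continue
        ns[$reg]=${val# }              # drop the space nvme-cli pads after ':'
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1)
    echo "nsze=${ns[nsze]} flbas=${ns[flbas]} lbaf4=${ns[lbaf4]}"

The trace's eval and IFS choreography is the name-indirection version of the same loop, writing into the dynamically named ng0n1 array instead of a fixed one.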
00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.731 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:28.732 23:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:28.732 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.733 23:11:00 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:28.733 23:11:00 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:28.733 23:11:00 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:28.733 23:11:00 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.733 23:11:00 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.733 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:28.734 23:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.734 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
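With id-ctrl slurped the same way, every controller register becomes a plain array lookup. Two examples against the values captured above (the 4 KiB minimum page size used to scale MDTS is an assumption for this QEMU controller; MDTS itself is a power-of-two multiplier per the NVMe spec):

    printf 'model=%s fw=%s\n' "${nvme1[mn]}" "${nvme1[fr]}"
    echo $(( (1 << ${nvme1[mdts]}) * 4096 ))   # mdts=7 -> 524288-byte max transfer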
00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
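The wctemp and cctemp values captured just above are kelvins, as the NVMe spec mandates; converting them:

    echo $(( ${nvme1[wctemp]} - 273 ))   # 343 K -> 70 C warning threshold
    echo $(( ${nvme1[cctemp]} - 273 ))   # 373 K -> 100 C critical threshold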
00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.735 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:28.736 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:28.737 23:11:00 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.737 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
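nsze for ng1n1 is a count of logical blocks, not bytes, and flbas=0x7 selects LBA format 7. Assuming lbaf7 carries lbads:12 (4096-byte blocks) here, as it does on the nvme0 namespaces above, the raw capacity works out as:

    echo $(( ${ng1n1[nsze]} * (1 << 12) ))   # 0x17a17a blocks * 4096 = 6343335936 bytes (~5.9 GiB)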
00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:28.738 23:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.738 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.739 23:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.739 23:11:00 nvme_fdp -- 
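The ng1n1 dump above comes from the nvme_get helper, which splits every "reg : val" line of nvme-cli output on the first colon and evals the pair into a global associative array (functions.sh@16-23 in the trace). A minimal, hedged sketch of that pattern, with illustrative names rather than the exact functions.sh source:

    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                       # global assoc array, as at functions.sh@20
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}              # nvme-cli pads the key: "nsze   : 0x17a17a"
            [[ -n $reg && -n $val ]] || continue  # skip lines that are not reg:val pairs
            eval "${ref}[\$reg]=\${val# }"        # e.g. ng1n1[nsze]=0x17a17a
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }
    # usage: nvme_get_sketch ng1n1 id-ns /dev/ng1n1; echo "${ng1n1[nsze]}"   # 0x17a17a

Note that IFS=: with two read targets only splits at the first colon, which is why values containing colons, like lbaf0='ms:0 lbads:9 rp:0 ', survive as single strings.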
00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:28.739 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:28.740 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 ' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:28.740 23:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
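At functions.sh@60-63 the fully parsed controller is registered: ctrls maps the device name to its identify array, nvmes stores the name of the per-controller namespace map, and bdfs records the PCI address. Because nvmes holds a variable name, consumers reach the namespace map through a nameref. A hedged sketch of that indirection, seeded with the values from the trace above:

    declare -A ctrls=([nvme1]=nvme1) nvmes=([nvme1]=nvme1_ns) bdfs=([nvme1]=0000:00:10.0)
    declare -A nvme1_ns=([1]=nvme1n1)   # ${ns##*n} is 1 for both ng1n1 and nvme1n1,
                                        # so the second @58 assignment overwrites the first
    for ctrl in "${!ctrls[@]}"; do
        declare -n _ns=${nvmes[$ctrl]}  # nameref, like local -n _ctrl_ns at functions.sh@53
        printf '%s (%s): %s\n' "$ctrl" "${bdfs[$ctrl]}" "${_ns[*]}"
        unset -n _ns                    # drop the nameref before re-pointing it
    done
    # prints: nvme1 (0000:00:10.0): nvme1n1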
[[ -n '' ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:28.742 23:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:28.743 23:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
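What the trace above shows is functions.sh's nvme_get walking the key/value output of id-ctrl for controller nvme2 line by line: split each line on ":", skip empty values, and eval the pair into a bash associative array (nvme2[sqes]=0x66, nvme2[oncs]=0x15d, and so on). A minimal sketch of that loop, reconstructed from the functions.sh@16-@23 markers in the trace (the exact whitespace trimming is an assumption; the real script may differ):

nvme_get() {                                  # e.g. nvme_get nvme2 id-ctrl /dev/nvme2
    local ref=$1 reg val                      # @17
    shift                                     # @18
    local -gA "$ref=()"                       # @20: declare the global array
    while IFS=: read -r reg val; do           # @21: split "sqes : 0x66" on ":"
        reg=${reg// /} val=${val# }           # trim padding (assumed)
        [[ -n $val ]] || continue             # @22: skip headers/blank values
        eval "${ref}[${reg}]=\"${val}\""      # @23: nvme2[sqes]="0x66"
    done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16
}

The subnqn captured just above, nqn.2019-08.org.qemu:12342, marks nvme2 as one of the QEMU-emulated controllers the test VM exposes.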
00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.744 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 
23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.745 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.746 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:28.747 23:11:00 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 
23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:28.747 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:28.748 
23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.748 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
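Between namespace dumps the trace repeats the same @54-@58 frame: glob /sys/class/nvme/nvme2 for both the character-device (ng2n1..ng2n3) and block-device (nvme2n1..) namespace nodes, run nvme_get on each, and record the result in the controller's _ctrl_ns map. Sketched from those markers (extglob/nullglob and the surrounding function context are assumptions; the real script uses a nameref via local -n):

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
declare -n _ctrl_ns=${ctrl##*/}_ns            # @53: nameref onto nvme2_ns
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # ng2*|nvme2n* (@54)
    [[ -e $ns ]] || continue                  # @55
    ns_dev=${ns##*/}                          # @56: e.g. ng2n1
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: fills the ng2n1=() array
    _ctrl_ns[${ns_dev##*n}]=$ns_dev           # @58: key = namespace id (1, 2, 3)
done

Because the glob matches both device flavors, each namespace is parsed twice, which is why the ng2n* dumps above and the nvme2n1 dump that follows carry identical id-ns values.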
00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:28.749 23:11:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.749 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.750 23:11:00 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:28.750 23:11:00 nvme_fdp -- 
00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:09:28.750 23:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:28.750 23:11:00 nvme_fdp -- # (condensed) nvme2n1 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:28.752 23:11:00 nvme_fdp -- # (condensed) nvme2n1 LBA formats: identical to ng2n3 above, lbaf4 'ms:0 lbads:12 rp:0' in use
00:09:28.752 23:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:28.752 23:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:28.752 23:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:28.752 23:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:28.752 23:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:28.752 23:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
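The loop at functions.sh@54 uses a bash extglob so one pass matches both the block nodes (nvme2n1..n3) and the character "generic" nodes (ng2n1..n3) of a controller. A short sketch of just that globbing step, assuming the sysfs layout shown in the trace and that extglob/nullglob are enabled the way the script requires:

    # Sketch: enumerate block and generic namespace nodes of one controller
    # with the same @(...) extglob the trace shows at functions.sh@54.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern
        # matches ng2* (ng2n1, ...) as well as nvme2n* (nvme2n1, ...).
        echo "namespace node: ${ns##*/}"
    done

That is why each namespace is visited twice in this log, once as ngXnY and once as nvmeXnY, with the parsed fields landing in separately named arrays.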
00:09:28.752 23:11:00 nvme_fdp -- # (condensed) nvme2n2 id-ns fields: all values identical to nvme2n1 above (nsze/ncap/nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dlfeat=1 mssrl=128 mcl=128 msrc=127, remaining fields 0, zero nguid/eui64)
00:09:28.754 23:11:00 nvme_fdp -- # (condensed) nvme2n2 LBA formats: identical to nvme2n1, lbaf4 'ms:0 lbads:12 rp:0' in use
00:09:28.754 23:11:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:09:28.754 23:11:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:28.754 23:11:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:28.754 23:11:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:28.754 23:11:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:28.754 23:11:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
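In the lbafN entries above, ms is the per-block metadata size in bytes and lbads is the log2 of the data block size, while flbas selects the active entry. With flbas=0x4 every namespace here runs on lbaf4, that is 4096-byte blocks with no metadata, which is what the "(in use)" marker reflects. A small worked decoding; the bit layout is taken from the NVMe spec, not from anything printed in this log:

    # Sketch: decode the flbas/lbaf values shown in the trace.
    flbas=0x4
    fmt=$(( flbas & 0xf ))    # bits 3:0 select the format index -> 4
    lbads=12                  # lbaf4 is "ms:0 lbads:12 rp:0"
    echo "active format: lbaf${fmt}, block size $(( 1 << lbads )) bytes"  # 4096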
00:09:28.754 23:11:01 nvme_fdp -- # (condensed) nvme2n3 id-ns fields: all values identical to nvme2n1/nvme2n2 above (nsze/ncap/nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dlfeat=1 mssrl=128 mcl=128 msrc=127, remaining fields 0, zero nguid/eui64)
00:09:28.756 23:11:01 nvme_fdp -- # (condensed) nvme2n3 LBA formats: identical to nvme2n1/nvme2n2, lbaf4 'ms:0 lbads:12 rp:0' in use
00:09:28.756 23:11:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:09:28.756 23:11:01 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:09:28.756 23:11:01 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:09:28.756 23:11:01 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:09:28.756 23:11:01 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:09:28.756 23:11:01 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:28.756 23:11:01 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:09:28.756 23:11:01 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:09:28.756 23:11:01 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:09:28.756 23:11:01 nvme_fdp -- # (condensed) scripts/common.sh@18-27 -- pci_can_use: no allow/block filters set, return 0
00:09:28.756 23:11:01 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:09:28.756 23:11:01 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:09:28.756 23:11:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
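Once all three namespaces of nvme2 are parsed, functions.sh@60-63 records the controller in a set of global maps keyed by device name, tying it to its namespace table and PCI address before the scan moves on to nvme3. The shape of that bookkeeping, with the array names taken directly from the trace:

    # Sketch: controller bookkeeping as performed at functions.sh@60-63.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    ctrl_dev=nvme2
    ctrls["$ctrl_dev"]=nvme2                  # controller handle
    nvmes["$ctrl_dev"]=nvme2_ns               # name of its namespace map
    bdfs["$ctrl_dev"]=0000:00:12.0            # PCI bus/device/function
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme2    # index 2 preserves scan order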
00:09:28.756 23:11:01 nvme_fdp -- # (condensed) nvme3 id-ctrl fields: vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3
23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:28.757 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:28.758 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
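The trace above repeats one pattern per identify-controller field: functions.sh@21 reads a "name : value" pair with IFS set to ":", functions.sh@22 skips empty values, and functions.sh@23 evals the value into a per-controller array (nvme3 here). A minimal standalone sketch of that pattern, assuming a "name : value" text dump on stdin; the array name ctrl_regs and the sample fields are illustrative, not taken from the harness:

  #!/usr/bin/env bash
  # Populate an associative array from "name : value" lines, as functions.sh
  # does per controller (the real script evals into a dynamic name like nvme3).
  declare -A ctrl_regs
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}      # strip the padding around the register name
      [[ -n $reg && -n $val ]] || continue
      ctrl_regs[$reg]=${val# }      # keep the value, minus one leading space
  done < <(printf '%s\n' 'vid   : 0x1b36' 'ssvid : 0x1af4' 'mdts  : 7')
  echo "${ctrl_regs[vid]} ${ctrl_regs[mdts]}"   # -> 0x1b36 7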
00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:28.759 23:11:01 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:28.759 23:11:01 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.760 23:11:01 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:28.760 23:11:01 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:29.019 23:11:01 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:29.019 23:11:01 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:29.019 23:11:01 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:29.019 23:11:01 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:29.019 23:11:01 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:29.019 23:11:01 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:29.019 23:11:01 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:29.019 23:11:01 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:29.019 23:11:01 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:29.019 23:11:01 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:29.019 23:11:01 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:29.020 23:11:01 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:29.020 23:11:01 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:29.020 23:11:01 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:29.020 23:11:01 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:29.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.848 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.848 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.848 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.848 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.848 23:11:02 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:29.848 23:11:02 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:29.848 23:11:02 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.848 23:11:02 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:29.848 ************************************ 00:09:29.848 START TEST nvme_flexible_data_placement 00:09:29.848 ************************************ 00:09:29.848 23:11:02 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:30.108 Initializing NVMe Controllers 00:09:30.108 Attaching to 0000:00:13.0 00:09:30.108 Controller supports FDP Attached to 0000:00:13.0 00:09:30.108 Namespace ID: 1 Endurance Group ID: 1 00:09:30.108 Initialization complete. 
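The controller selection traced above (functions.sh@176-180) keys off bit 19 of the identify CTRATT field, which advertises Flexible Data Placement: nvme0, nvme1 and nvme2 report ctratt=0x8000 and are skipped, while nvme3 reports 0x88010 and is picked. A one-function sketch of that test, reusing the values from the trace:

  #!/usr/bin/env bash
  # CTRATT bit 19 = Flexible Data Placement support (NVMe base spec, TP4146).
  ctrl_has_fdp() {
      local ctratt=$1
      (( ctratt & 1 << 19 ))
  }
  for ctratt in 0x8000 0x88010; do
      ctrl_has_fdp "$ctratt" && echo "$ctratt: FDP" || echo "$ctratt: no FDP"
  done   # -> 0x8000: no FDP, 0x88010: FDP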
00:09:30.108 00:09:30.108 ================================== 00:09:30.108 == FDP tests for Namespace: #01 == 00:09:30.108 ================================== 00:09:30.108 00:09:30.108 Get Feature: FDP: 00:09:30.108 ================= 00:09:30.108 Enabled: Yes 00:09:30.108 FDP configuration Index: 0 00:09:30.108 00:09:30.108 FDP configurations log page 00:09:30.108 =========================== 00:09:30.108 Number of FDP configurations: 1 00:09:30.108 Version: 0 00:09:30.108 Size: 112 00:09:30.108 FDP Configuration Descriptor: 0 00:09:30.108 Descriptor Size: 96 00:09:30.108 Reclaim Group Identifier format: 2 00:09:30.108 FDP Volatile Write Cache: Not Present 00:09:30.108 FDP Configuration: Valid 00:09:30.108 Vendor Specific Size: 0 00:09:30.108 Number of Reclaim Groups: 2 00:09:30.108 Number of Reclaim Unit Handles: 8 00:09:30.108 Max Placement Identifiers: 128 00:09:30.108 Number of Namespaces Supported: 256 00:09:30.108 Reclaim Unit Nominal Size: 6000000 bytes 00:09:30.108 Estimated Reclaim Unit Time Limit: Not Reported 00:09:30.108 RUH Desc #000: RUH Type: Initially Isolated 00:09:30.108 RUH Desc #001: RUH Type: Initially Isolated 00:09:30.108 RUH Desc #002: RUH Type: Initially Isolated 00:09:30.108 RUH Desc #003: RUH Type: Initially Isolated 00:09:30.108 RUH Desc #004: RUH Type: Initially Isolated 00:09:30.108 RUH Desc #005: RUH Type: Initially Isolated 00:09:30.108 RUH Desc #006: RUH Type: Initially Isolated 00:09:30.108 RUH Desc #007: RUH Type: Initially Isolated 00:09:30.108 00:09:30.108 FDP reclaim unit handle usage log page 00:09:30.108 ====================================== 00:09:30.108 Number of Reclaim Unit Handles: 8 00:09:30.108 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:30.108 RUH Usage Desc #001: RUH Attributes: Unused 00:09:30.108 RUH Usage Desc #002: RUH Attributes: Unused 00:09:30.108 RUH Usage Desc #003: RUH Attributes: Unused 00:09:30.108 RUH Usage Desc #004: RUH Attributes: Unused 00:09:30.108 RUH Usage Desc #005: RUH Attributes: Unused 00:09:30.108 RUH Usage Desc #006: RUH Attributes: Unused 00:09:30.108 RUH Usage Desc #007: RUH Attributes: Unused 00:09:30.108 00:09:30.108 FDP statistics log page 00:09:30.108 ======================= 00:09:30.108 Host bytes with metadata written: 994447360 00:09:30.108 Media bytes with metadata written: 994705408 00:09:30.108 Media bytes erased: 0 00:09:30.108 00:09:30.108 FDP Reclaim unit handle status 00:09:30.108 ============================== 00:09:30.108 Number of RUHS descriptors: 2 00:09:30.108 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000b9f 00:09:30.108 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:30.108 00:09:30.108 FDP write on placement id: 0 success 00:09:30.108 00:09:30.108 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:30.108 00:09:30.108 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:30.108 00:09:30.108 Get Feature: FDP Events for Placement handle: #0 00:09:30.108 ======================== 00:09:30.108 Number of FDP Events: 6 00:09:30.108 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:30.108 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:30.108 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:30.108 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:30.108 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:30.108 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:30.108 00:09:30.108 FDP events log page
00:09:30.108 =================== 00:09:30.108 Number of FDP events: 1 00:09:30.108 FDP Event #0: 00:09:30.108 Event Type: RU Not Written to Capacity 00:09:30.108 Placement Identifier: Valid 00:09:30.108 NSID: Valid 00:09:30.108 Location: Valid 00:09:30.108 Placement Identifier: 0 00:09:30.108 Event Timestamp: 5 00:09:30.108 Namespace Identifier: 1 00:09:30.108 Reclaim Group Identifier: 0 00:09:30.108 Reclaim Unit Handle Identifier: 0 00:09:30.108 00:09:30.108 FDP test passed 00:09:30.108 ************************************ 00:09:30.108 END TEST nvme_flexible_data_placement 00:09:30.108 ************************************ 00:09:30.108 00:09:30.108 real 0m0.231s 00:09:30.108 user 0m0.076s 00:09:30.108 sys 0m0.054s 00:09:30.108 23:11:02 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.109 23:11:02 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:30.109 ************************************ 00:09:30.109 END TEST nvme_fdp 00:09:30.109 ************************************ 00:09:30.109 00:09:30.109 real 0m7.633s 00:09:30.109 user 0m1.107s 00:09:30.109 sys 0m1.383s 00:09:30.109 23:11:02 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.109 23:11:02 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:30.109 23:11:02 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:30.109 23:11:02 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:30.109 23:11:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.109 23:11:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.109 23:11:02 -- common/autotest_common.sh@10 -- # set +x 00:09:30.109 ************************************ 00:09:30.109 START TEST nvme_rpc 00:09:30.109 ************************************ 00:09:30.109 23:11:02 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:30.109 * Looking for test storage... 
00:09:30.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.368 23:11:02 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:30.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.368 --rc genhtml_branch_coverage=1 00:09:30.368 --rc genhtml_function_coverage=1 00:09:30.368 --rc genhtml_legend=1 00:09:30.368 --rc geninfo_all_blocks=1 00:09:30.368 --rc geninfo_unexecuted_blocks=1 00:09:30.368 00:09:30.368 ' 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:30.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.368 --rc genhtml_branch_coverage=1 00:09:30.368 --rc genhtml_function_coverage=1 00:09:30.368 --rc genhtml_legend=1 00:09:30.368 --rc geninfo_all_blocks=1 00:09:30.368 --rc geninfo_unexecuted_blocks=1 00:09:30.368 00:09:30.368 ' 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:30.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.368 --rc genhtml_branch_coverage=1 00:09:30.368 --rc genhtml_function_coverage=1 00:09:30.368 --rc genhtml_legend=1 00:09:30.368 --rc geninfo_all_blocks=1 00:09:30.368 --rc geninfo_unexecuted_blocks=1 00:09:30.368 00:09:30.368 ' 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:30.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.368 --rc genhtml_branch_coverage=1 00:09:30.368 --rc genhtml_function_coverage=1 00:09:30.368 --rc genhtml_legend=1 00:09:30.368 --rc geninfo_all_blocks=1 00:09:30.368 --rc geninfo_unexecuted_blocks=1 00:09:30.368 00:09:30.368 ' 00:09:30.368 23:11:02 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.368 23:11:02 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:30.368 23:11:02 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:30.369 23:11:02 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:30.369 23:11:02 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:30.369 23:11:02 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:30.369 23:11:02 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:30.369 23:11:02 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:30.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.369 23:11:02 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:30.369 23:11:02 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65772 00:09:30.369 23:11:02 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:30.369 23:11:02 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65772 00:09:30.369 23:11:02 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65772 ']' 00:09:30.369 23:11:02 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:30.369 23:11:02 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.369 23:11:02 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.369 23:11:02 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.369 23:11:02 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.369 23:11:02 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.369 [2024-11-25 23:11:02.692366] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:09:30.369 [2024-11-25 23:11:02.692481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65772 ] 00:09:30.628 [2024-11-25 23:11:02.849172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:30.628 [2024-11-25 23:11:02.949779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.628 [2024-11-25 23:11:02.949871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.195 23:11:03 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.195 23:11:03 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:31.195 23:11:03 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:31.453 Nvme0n1 00:09:31.453 23:11:03 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:31.453 23:11:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:31.713 request: 00:09:31.713 { 00:09:31.713 "bdev_name": "Nvme0n1", 00:09:31.713 "filename": "non_existing_file", 00:09:31.713 "method": "bdev_nvme_apply_firmware", 00:09:31.713 "req_id": 1 00:09:31.713 } 00:09:31.713 Got JSON-RPC error response 00:09:31.713 response: 00:09:31.713 { 00:09:31.713 "code": -32603, 00:09:31.713 "message": "open file failed." 00:09:31.713 } 00:09:31.713 23:11:04 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:31.713 23:11:04 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:31.713 23:11:04 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:31.974 23:11:04 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:31.974 23:11:04 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65772 00:09:31.974 23:11:04 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65772 ']' 00:09:31.974 23:11:04 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65772 00:09:31.974 23:11:04 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:31.974 23:11:04 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.974 23:11:04 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65772 00:09:31.974 23:11:04 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.974 killing process with pid 65772 00:09:31.974 23:11:04 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.974 23:11:04 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65772' 00:09:31.974 23:11:04 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65772 00:09:31.974 23:11:04 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65772 00:09:33.356 ************************************ 00:09:33.356 END TEST nvme_rpc 00:09:33.356 ************************************ 00:09:33.356 00:09:33.356 real 0m3.106s 00:09:33.356 user 0m5.908s 00:09:33.356 sys 0m0.513s 00:09:33.356 23:11:05 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.356 23:11:05 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.356 23:11:05 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:33.356 23:11:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:33.356 23:11:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.356 23:11:05 -- common/autotest_common.sh@10 -- # set +x 00:09:33.356 ************************************ 00:09:33.356 START TEST nvme_rpc_timeouts 00:09:33.356 ************************************ 00:09:33.356 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:33.357 * Looking for test storage... 00:09:33.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:33.357 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.357 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.357 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.357 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.357 23:11:05 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:33.357 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.357 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.357 --rc genhtml_branch_coverage=1 00:09:33.357 --rc genhtml_function_coverage=1 00:09:33.357 --rc genhtml_legend=1 00:09:33.357 --rc geninfo_all_blocks=1 00:09:33.357 --rc geninfo_unexecuted_blocks=1 00:09:33.357 00:09:33.357 ' 00:09:33.357 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.357 --rc genhtml_branch_coverage=1 00:09:33.357 --rc genhtml_function_coverage=1 00:09:33.357 --rc genhtml_legend=1 00:09:33.357 --rc geninfo_all_blocks=1 00:09:33.357 --rc geninfo_unexecuted_blocks=1 00:09:33.357 00:09:33.357 ' 00:09:33.357 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:33.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.357 --rc genhtml_branch_coverage=1 00:09:33.357 --rc genhtml_function_coverage=1 00:09:33.357 --rc genhtml_legend=1 00:09:33.357 --rc geninfo_all_blocks=1 00:09:33.357 --rc geninfo_unexecuted_blocks=1 00:09:33.357 00:09:33.357 ' 00:09:33.357 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.357 --rc genhtml_branch_coverage=1 00:09:33.357 --rc genhtml_function_coverage=1 00:09:33.357 --rc genhtml_legend=1 00:09:33.357 --rc geninfo_all_blocks=1 00:09:33.357 --rc geninfo_unexecuted_blocks=1 00:09:33.357 00:09:33.357 ' 00:09:33.357 23:11:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:33.357 23:11:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65833 00:09:33.357 23:11:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65833 00:09:33.357 23:11:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65865 00:09:33.357 23:11:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
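The timeouts test registered above follows a snapshot-modify-snapshot flow: save the target's default config to one tmpfile, switch the NVMe timeout options over RPC, save again, and then diff the two files per setting. A condensed sketch of that flow, using the tmpfile names and the exact flags traced below (the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path is abbreviated to rpc.py, assumed on PATH):

  # Snapshot defaults, apply the modified timeouts, snapshot again.
  rpc.py save_config > /tmp/settings_default_65833
  rpc.py bdev_nvme_set_options --timeout-us=12000000 \
      --timeout-admin-us=24000000 --action-on-timeout=abort
  rpc.py save_config > /tmp/settings_modified_65833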
00:09:33.357 23:11:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65865 00:09:33.357 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65865 ']' 00:09:33.357 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.357 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.357 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.357 23:11:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:33.357 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.357 23:11:05 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:33.616 [2024-11-25 23:11:05.781533] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:09:33.616 [2024-11-25 23:11:05.781739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65865 ] 00:09:33.616 [2024-11-25 23:11:05.936332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:33.875 [2024-11-25 23:11:06.020031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.875 [2024-11-25 23:11:06.020051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.443 23:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.443 Checking default timeout settings: 00:09:34.443 23:11:06 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:34.443 23:11:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:34.443 23:11:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:34.702 Making settings changes with rpc: 00:09:34.702 23:11:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:34.702 23:11:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:34.702 Check default vs. modified settings: 00:09:34.702 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:34.702 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65833 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65833 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.273 Setting action_on_timeout is changed as expected. 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65833 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65833 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.273 Setting timeout_us is changed as expected. 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
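Each entry in `settings_to_check` is verified the same way in the trace: grep the setting out of the saved JSON config, take the second whitespace-separated field with awk, strip quotes and commas with sed, and require that the default and modified values differ. Condensed into one loop (tmpfile names as traced; `get_setting` is an illustrative helper name, not one from the script):

    get_setting() {    # $1 = setting name, $2 = saved-config file
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(get_setting "$setting" /tmp/settings_default_65833)
        after=$(get_setting "$setting" /tmp/settings_modified_65833)
        if [ "$before" == "$after" ]; then
            echo "Setting $setting was not changed" >&2
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done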
00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65833 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65833 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:35.273 Setting timeout_admin_us is changed as expected. 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65833 /tmp/settings_modified_65833 00:09:35.273 23:11:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65865 00:09:35.273 23:11:07 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65865 ']' 00:09:35.273 23:11:07 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65865 00:09:35.273 23:11:07 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:35.273 23:11:07 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.274 23:11:07 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65865 00:09:35.274 killing process with pid 65865 00:09:35.274 23:11:07 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.274 23:11:07 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.274 23:11:07 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65865' 00:09:35.274 23:11:07 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65865 00:09:35.274 23:11:07 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65865 00:09:36.655 RPC TIMEOUT SETTING TEST PASSED. 00:09:36.656 23:11:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
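Stripped of the tracing, the whole nvme_rpc_timeouts test reduces to four RPC calls against the spdk_tgt started earlier, plus the comparison loop above. A condensed sketch of that flow (paths and values as traced; the save_config redirections are assumed, since xtrace hides them):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_65833      # defaults: none / 0 / 0
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_65833     # abort / 12000000 / 24000000
    # ... per-setting comparison as above ...
    kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"        # killprocess 65865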
00:09:36.656 00:09:36.656 real 0m3.047s 00:09:36.656 user 0m5.824s 00:09:36.656 sys 0m0.524s 00:09:36.656 ************************************ 00:09:36.656 END TEST nvme_rpc_timeouts 00:09:36.656 ************************************ 00:09:36.656 23:11:08 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.656 23:11:08 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:36.656 23:11:08 -- spdk/autotest.sh@239 -- # uname -s 00:09:36.656 23:11:08 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:36.656 23:11:08 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:36.656 23:11:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.656 23:11:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.656 23:11:08 -- common/autotest_common.sh@10 -- # set +x 00:09:36.656 ************************************ 00:09:36.656 START TEST sw_hotplug 00:09:36.656 ************************************ 00:09:36.656 23:11:08 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:36.656 * Looking for test storage... 00:09:36.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:36.656 23:11:08 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.656 23:11:08 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.656 23:11:08 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.656 23:11:08 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.656 23:11:08 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:36.656 23:11:08 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.656 23:11:08 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.656 --rc genhtml_branch_coverage=1 00:09:36.656 --rc genhtml_function_coverage=1 00:09:36.656 --rc genhtml_legend=1 00:09:36.656 --rc geninfo_all_blocks=1 00:09:36.656 --rc geninfo_unexecuted_blocks=1 00:09:36.656 00:09:36.656 ' 00:09:36.656 23:11:08 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.656 --rc genhtml_branch_coverage=1 00:09:36.656 --rc genhtml_function_coverage=1 00:09:36.656 --rc genhtml_legend=1 00:09:36.656 --rc geninfo_all_blocks=1 00:09:36.656 --rc geninfo_unexecuted_blocks=1 00:09:36.656 00:09:36.656 ' 00:09:36.656 23:11:08 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.656 --rc genhtml_branch_coverage=1 00:09:36.656 --rc genhtml_function_coverage=1 00:09:36.656 --rc genhtml_legend=1 00:09:36.656 --rc geninfo_all_blocks=1 00:09:36.656 --rc geninfo_unexecuted_blocks=1 00:09:36.656 00:09:36.656 ' 00:09:36.656 23:11:08 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.656 --rc genhtml_branch_coverage=1 00:09:36.656 --rc genhtml_function_coverage=1 00:09:36.656 --rc genhtml_legend=1 00:09:36.656 --rc geninfo_all_blocks=1 00:09:36.656 --rc geninfo_unexecuted_blocks=1 00:09:36.656 00:09:36.656 ' 00:09:36.656 23:11:08 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:36.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:36.916 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:36.916 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:36.916 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:36.916 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:36.916 23:11:09 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:36.916 23:11:09 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:36.916 23:11:09 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:36.916 23:11:09 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:36.916 23:11:09 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:36.916 23:11:09 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:36.917 23:11:09 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:37.175 23:11:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:37.175 23:11:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:37.175 23:11:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:37.175 23:11:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:37.175 23:11:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:37.176 23:11:09 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:37.176 23:11:09 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:37.176 23:11:09 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:37.176 23:11:09 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:37.176 23:11:09 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:37.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.434 Waiting for block devices as requested 00:09:37.694 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.694 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.694 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.694 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.973 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:42.973 23:11:15 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:42.973 23:11:15 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:43.237 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:43.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:43.237 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:43.518 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:43.811 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.811 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.811 23:11:16 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:43.811 23:11:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:43.811 23:11:16 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:43.811 23:11:16 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:43.811 23:11:16 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66719 00:09:43.811 23:11:16 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:43.811 23:11:16 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:43.811 23:11:16 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:43.811 23:11:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:43.811 23:11:16 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:43.811 23:11:16 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:43.811 23:11:16 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:43.811 23:11:16 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:43.811 23:11:16 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:43.811 23:11:16 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:43.811 23:11:16 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:43.811 23:11:16 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:43.811 23:11:16 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:43.811 23:11:16 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:44.069 Initializing NVMe Controllers 00:09:44.069 Attaching to 0000:00:10.0 00:09:44.069 Attaching to 0000:00:11.0 00:09:44.069 Attached to 0000:00:11.0 00:09:44.069 Attached to 0000:00:10.0 00:09:44.069 Initialization complete. Starting I/O... 
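Before the hotplug example was launched above, `nvme_in_userspace` (scripts/common.sh@312-@329) assembled the BDF list by asking lspci for mass-storage/NVM-class functions (class 01, subclass 08, prog-if 02) and checking each candidate with pci_can_use; only the first two of the four matches survive the later `nvmes[@]::nvme_count` slice (nvme_count=2), and PCI_ALLOWED then restricts setup.sh to the same pair. The enumeration core, using the exact lspci/grep/awk pipeline from the trace (function name illustrative):

    # BDFs of all NVMe-class PCI functions (class 0108, programming interface 02)
    nvme_class_bdfs() {
        lspci -mm -n -D | grep -i -- -p02 \
            | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
            | tr -d '"'
    }

    nvme_class_bdfs
    # -> 0000:00:10.0
    #    0000:00:11.0
    #    0000:00:12.0
    #    0000:00:13.0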
00:09:44.069 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:44.069 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:44.069 00:09:45.007 QEMU NVMe Ctrl (12341 ): 2530 I/Os completed (+2530) 00:09:45.007 QEMU NVMe Ctrl (12340 ): 2549 I/Os completed (+2549) 00:09:45.007 00:09:46.401 QEMU NVMe Ctrl (12341 ): 5845 I/Os completed (+3315) 00:09:46.401 QEMU NVMe Ctrl (12340 ): 5837 I/Os completed (+3288) 00:09:46.401 00:09:47.340 QEMU NVMe Ctrl (12341 ): 9100 I/Os completed (+3255) 00:09:47.340 QEMU NVMe Ctrl (12340 ): 9078 I/Os completed (+3241) 00:09:47.340 00:09:48.274 QEMU NVMe Ctrl (12341 ): 12790 I/Os completed (+3690) 00:09:48.274 QEMU NVMe Ctrl (12340 ): 12762 I/Os completed (+3684) 00:09:48.274 00:09:49.207 QEMU NVMe Ctrl (12341 ): 16497 I/Os completed (+3707) 00:09:49.207 QEMU NVMe Ctrl (12340 ): 16469 I/Os completed (+3707) 00:09:49.207 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:50.140 [2024-11-25 23:11:22.157126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:50.140 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:50.140 [2024-11-25 23:11:22.158214] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 [2024-11-25 23:11:22.158256] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 [2024-11-25 23:11:22.158271] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 [2024-11-25 23:11:22.158287] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:50.140 [2024-11-25 23:11:22.159690] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 [2024-11-25 23:11:22.159736] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 [2024-11-25 23:11:22.159749] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 [2024-11-25 23:11:22.159761] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:50.140 [2024-11-25 23:11:22.181876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
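The first hotplug event above is a surprise removal: remove_attach_helper's bare `echo 1` (sw_hotplug.sh@40) redirects into each device's sysfs remove node, the driver drops into "failed state" and aborts all outstanding commands, and the later `echo 1` at @56 triggers a bus rescan. The underlying Linux mechanism, with the sysfs targets spelled out (the redirection targets are inferred — xtrace does not show them):

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # surprise-remove the function
    # driver side: "[0000:00:10.0, 0] in failed state", outstanding commands aborted
    echo 1 > /sys/bus/pci/rescan                   # re-enumerate; the device comes back

The `EAL: eal_parse_sysfs_value(): cannot open sysfs value ... Scan for (pci) bus failed` lines are consistent with DPDK re-probing a function whose sysfs directory has just vanished, not with a test failure.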
00:09:50.140 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:50.140 [2024-11-25 23:11:22.182844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 [2024-11-25 23:11:22.182945] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 [2024-11-25 23:11:22.182965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 [2024-11-25 23:11:22.182978] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:50.140 [2024-11-25 23:11:22.184414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 [2024-11-25 23:11:22.184446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 [2024-11-25 23:11:22.184458] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 [2024-11-25 23:11:22.184468] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:50.140 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:50.140 EAL: Scan for (pci) bus failed. 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:50.140 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:50.140 Attaching to 0000:00:10.0 00:09:50.140 Attached to 0000:00:10.0 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:50.140 23:11:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:50.140 Attaching to 0000:00:11.0 00:09:50.140 Attached to 0000:00:11.0 00:09:51.074 QEMU NVMe Ctrl (12340 ): 3641 I/Os completed (+3641) 00:09:51.074 QEMU NVMe Ctrl (12341 ): 3321 I/Os completed (+3321) 00:09:51.074 00:09:52.008 QEMU NVMe Ctrl (12340 ): 7378 I/Os completed (+3737) 00:09:52.008 QEMU NVMe Ctrl (12341 ): 7055 I/Os completed (+3734) 00:09:52.008 00:09:53.383 QEMU NVMe Ctrl (12340 ): 11113 I/Os completed (+3735) 00:09:53.383 QEMU NVMe Ctrl (12341 ): 10787 I/Os completed (+3732) 00:09:53.383 00:09:54.328 QEMU NVMe Ctrl (12340 ): 14434 I/Os completed (+3321) 00:09:54.328 QEMU NVMe Ctrl (12341 ): 14111 I/Os completed (+3324) 00:09:54.328 00:09:55.274 QEMU NVMe Ctrl (12340 ): 17151 I/Os completed (+2717) 00:09:55.274 QEMU NVMe Ctrl (12341 ): 16825 I/Os completed (+2714) 00:09:55.274 00:09:56.209 QEMU NVMe Ctrl (12340 ): 20041 I/Os completed (+2890) 00:09:56.209 QEMU NVMe Ctrl (12341 ): 19724 I/Os completed (+2899) 00:09:56.209 00:09:57.139 QEMU NVMe Ctrl (12340 ): 23351 I/Os completed (+3310) 00:09:57.139 QEMU NVMe Ctrl (12341 ): 22855 I/Os completed (+3131) 
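Between rescan and the next event, each recovered device is handed back to userspace (sw_hotplug.sh@58-@62: `echo uio_pci_generic`, the BDF echoed twice, then `echo ''`). A plausible reading of those writes as the standard driver_override sequence — inferred, since the trace again hides the redirection targets:

    bdf=0000:00:10.0
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe                  # kernel binds the override
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"     # clear it for later passes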
00:09:57.139 00:09:58.076 QEMU NVMe Ctrl (12340 ): 26780 I/Os completed (+3429) 00:09:58.076 QEMU NVMe Ctrl (12341 ): 26249 I/Os completed (+3394) 00:09:58.076 00:09:59.020 QEMU NVMe Ctrl (12340 ): 29902 I/Os completed (+3122) 00:09:59.020 QEMU NVMe Ctrl (12341 ): 29355 I/Os completed (+3106) 00:09:59.020 00:10:00.403 QEMU NVMe Ctrl (12340 ): 33638 I/Os completed (+3736) 00:10:00.403 QEMU NVMe Ctrl (12341 ): 33131 I/Os completed (+3776) 00:10:00.403 00:10:01.357 QEMU NVMe Ctrl (12340 ): 37526 I/Os completed (+3888) 00:10:01.357 QEMU NVMe Ctrl (12341 ): 37019 I/Os completed (+3888) 00:10:01.357 00:10:02.302 QEMU NVMe Ctrl (12340 ): 41395 I/Os completed (+3869) 00:10:02.302 QEMU NVMe Ctrl (12341 ): 40873 I/Os completed (+3854) 00:10:02.302 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:02.302 [2024-11-25 23:11:34.455457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:02.302 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:02.302 [2024-11-25 23:11:34.456576] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 [2024-11-25 23:11:34.456641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 [2024-11-25 23:11:34.456669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 [2024-11-25 23:11:34.456699] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:02.302 [2024-11-25 23:11:34.458397] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 [2024-11-25 23:11:34.458491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 [2024-11-25 23:11:34.458548] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 [2024-11-25 23:11:34.458575] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 EAL: eal_parse_sysfs_value(): cannot read sysfs value /sys/bus/pci/devices/0000:00:10.0/subsystem_device 00:10:02.302 EAL: Scan for (pci) bus failed. 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:02.302 [2024-11-25 23:11:34.476739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:02.302 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:02.302 [2024-11-25 23:11:34.477753] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 [2024-11-25 23:11:34.477847] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 [2024-11-25 23:11:34.477880] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 [2024-11-25 23:11:34.477904] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:02.302 [2024-11-25 23:11:34.479347] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 [2024-11-25 23:11:34.479431] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 [2024-11-25 23:11:34.479459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 [2024-11-25 23:11:34.479525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:02.302 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:02.302 Attaching to 0000:00:10.0 00:10:02.564 Attached to 0000:00:10.0 00:10:02.564 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:02.564 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:02.564 23:11:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:02.564 Attaching to 0000:00:11.0 00:10:02.565 Attached to 0000:00:11.0 00:10:03.135 QEMU NVMe Ctrl (12340 ): 2632 I/Os completed (+2632) 00:10:03.135 QEMU NVMe Ctrl (12341 ): 2240 I/Os completed (+2240) 00:10:03.135 00:10:04.098 QEMU NVMe Ctrl (12340 ): 6379 I/Os completed (+3747) 00:10:04.098 QEMU NVMe Ctrl (12341 ): 5953 I/Os completed (+3713) 00:10:04.098 00:10:05.033 QEMU NVMe Ctrl (12340 ): 10041 I/Os completed (+3662) 00:10:05.033 QEMU NVMe Ctrl (12341 ): 9496 I/Os completed (+3543) 00:10:05.033 00:10:06.405 QEMU NVMe Ctrl (12340 ): 13691 I/Os completed (+3650) 00:10:06.405 QEMU NVMe Ctrl (12341 ): 13057 I/Os completed (+3561) 00:10:06.405 00:10:07.337 QEMU NVMe Ctrl (12340 ): 17449 I/Os completed (+3758) 00:10:07.337 QEMU NVMe Ctrl (12341 ): 16630 I/Os completed (+3573) 00:10:07.337 00:10:08.271 QEMU NVMe Ctrl (12340 ): 21500 I/Os completed (+4051) 00:10:08.271 QEMU NVMe Ctrl (12341 ): 20569 I/Os completed (+3939) 00:10:08.271 00:10:09.204 QEMU NVMe Ctrl (12340 ): 25139 I/Os completed (+3639) 00:10:09.204 QEMU NVMe Ctrl (12341 ): 24087 I/Os completed (+3518) 00:10:09.204 00:10:10.139 QEMU NVMe Ctrl (12340 ): 28750 I/Os completed (+3611) 00:10:10.139 QEMU NVMe Ctrl (12341 ): 27671 I/Os completed (+3584) 00:10:10.139 
00:10:11.071 QEMU NVMe Ctrl (12340 ): 32540 I/Os completed (+3790) 00:10:11.071 QEMU NVMe Ctrl (12341 ): 31247 I/Os completed (+3576) 00:10:11.071 00:10:12.012 QEMU NVMe Ctrl (12340 ): 36205 I/Os completed (+3665) 00:10:12.012 QEMU NVMe Ctrl (12341 ): 34815 I/Os completed (+3568) 00:10:12.012 00:10:13.420 QEMU NVMe Ctrl (12340 ): 39915 I/Os completed (+3710) 00:10:13.420 QEMU NVMe Ctrl (12341 ): 38393 I/Os completed (+3578) 00:10:13.420 00:10:14.056 QEMU NVMe Ctrl (12340 ): 44541 I/Os completed (+4626) 00:10:14.056 QEMU NVMe Ctrl (12341 ): 42900 I/Os completed (+4507) 00:10:14.056 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:14.622 [2024-11-25 23:11:46.773087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:14.622 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:14.622 [2024-11-25 23:11:46.774530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 [2024-11-25 23:11:46.774670] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 [2024-11-25 23:11:46.774694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 [2024-11-25 23:11:46.774714] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:14.622 [2024-11-25 23:11:46.777083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 [2024-11-25 23:11:46.777204] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 [2024-11-25 23:11:46.777241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 [2024-11-25 23:11:46.777382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:14.622 [2024-11-25 23:11:46.795964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:14.622 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:14.622 [2024-11-25 23:11:46.797380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 [2024-11-25 23:11:46.797428] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 [2024-11-25 23:11:46.797449] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 [2024-11-25 23:11:46.797465] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:14.622 [2024-11-25 23:11:46.799267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 [2024-11-25 23:11:46.799305] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 [2024-11-25 23:11:46.799323] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 [2024-11-25 23:11:46.799336] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:14.622 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:14.622 EAL: Scan for (pci) bus failed. 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:14.622 23:11:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:14.622 Attaching to 0000:00:10.0 00:10:14.622 Attached to 0000:00:10.0 00:10:14.880 23:11:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:14.880 23:11:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:14.880 23:11:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:14.880 Attaching to 0000:00:11.0 00:10:14.880 Attached to 0000:00:11.0 00:10:14.880 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:14.880 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:14.880 [2024-11-25 23:11:47.045387] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:27.089 23:11:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:27.089 23:11:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:27.089 23:11:59 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.89 00:10:27.089 23:11:59 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.89 00:10:27.089 23:11:59 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:27.089 23:11:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.89 00:10:27.089 23:11:59 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.89 2 00:10:27.089 remove_attach_helper took 42.89s to complete (handling 2 nvme drive(s)) 23:11:59 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:33.653 23:12:05 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66719 00:10:33.653 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66719) - No such process 00:10:33.653 23:12:05 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66719 00:10:33.653 23:12:05 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:33.653 23:12:05 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:33.653 23:12:05 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:33.653 23:12:05 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67267 00:10:33.653 23:12:05 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:33.653 23:12:05 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:33.653 23:12:05 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67267 00:10:33.653 23:12:05 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67267 ']' 00:10:33.653 23:12:05 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.653 23:12:05 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.653 23:12:05 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.653 23:12:05 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.653 23:12:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:33.653 [2024-11-25 23:12:05.132035] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
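The 42.89 s reported for the first pass matches its scripted pacing almost exactly: one initial `sleep 6` (@36) plus three per-event `sleep 12` phases (@66) gives a 42 s floor, leaving under a second for the removals, rescans and rebinds themselves. And since the hotplug example exits on its own after the last event, the `kill: (66719) - No such process` is the expected outcome of the `kill -0` liveness probe. The target-mode pass now starts spdk_tgt and blocks in waitforlisten until the RPC socket answers; the helper's shape is roughly (illustrative, not the verbatim autotest_common.sh function):

    waitforlisten() {    # $1 = pid, $2 = RPC socket (default /var/tmp/spdk.sock)
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do                   # max_retries=100, as traced
            kill -0 "$pid" 2> /dev/null || return 1         # target died early
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &> /dev/null && return 0    # socket is up
            sleep 0.5
        done
        return 1
    }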
00:10:33.653 [2024-11-25 23:12:05.132165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67267 ] 00:10:33.653 [2024-11-25 23:12:05.292595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.653 [2024-11-25 23:12:05.400307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.914 23:12:06 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.914 23:12:06 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:33.914 23:12:06 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:33.914 23:12:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.914 23:12:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:33.914 23:12:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.914 23:12:06 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:33.914 23:12:06 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:33.914 23:12:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:33.914 23:12:06 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:33.914 23:12:06 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:33.914 23:12:06 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:33.914 23:12:06 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:33.914 23:12:06 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:33.914 23:12:06 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:33.914 23:12:06 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:33.914 23:12:06 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:33.914 23:12:06 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:33.914 23:12:06 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:40.533 23:12:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.533 23:12:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:40.533 23:12:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:40.533 [2024-11-25 23:12:12.136082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:40.533 [2024-11-25 23:12:12.137563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.533 [2024-11-25 23:12:12.137603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.533 [2024-11-25 23:12:12.137618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.533 [2024-11-25 23:12:12.137640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.533 [2024-11-25 23:12:12.137648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.533 [2024-11-25 23:12:12.137656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.533 [2024-11-25 23:12:12.137664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.533 [2024-11-25 23:12:12.137673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.533 [2024-11-25 23:12:12.137679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.533 [2024-11-25 23:12:12.137691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.533 [2024-11-25 23:12:12.137698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.533 [2024-11-25 23:12:12.137707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:40.533 23:12:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.533 23:12:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:40.533 [2024-11-25 23:12:12.636053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
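With `bdev_nvme_set_hotplug -e` enabled on the target (sw_hotplug.sh@115), the helper no longer trusts sysfs alone: after each removal it asks the target which controllers still back bdevs. `bdev_bdfs` is the traced pipeline, with rpc_cmd expanded to the rpc.py call it wraps:

    # PCI addresses of every NVMe controller that still backs a bdev
    bdev_bdfs() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    bdfs=($(bdev_bdfs))    # (0000:00:10.0 0000:00:11.0) while both are attached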
00:10:40.533 [2024-11-25 23:12:12.637335] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.533 [2024-11-25 23:12:12.637366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.533 [2024-11-25 23:12:12.637379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.533 [2024-11-25 23:12:12.637398] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.533 [2024-11-25 23:12:12.637407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.533 [2024-11-25 23:12:12.637414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.533 [2024-11-25 23:12:12.637423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.533 [2024-11-25 23:12:12.637430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.533 [2024-11-25 23:12:12.637439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.533 [2024-11-25 23:12:12.637446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.533 [2024-11-25 23:12:12.637454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:40.533 [2024-11-25 23:12:12.637461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:40.533 23:12:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:40.533 23:12:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:41.105 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:41.105 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:41.105 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:41.105 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:41.105 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:41.105 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:41.105 23:12:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.105 23:12:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.105 23:12:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.105 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:41.105 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:41.105 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:41.105 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:41.105 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:41.105 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:41.105 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:41.105 23:12:13 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:41.105 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:41.105 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:41.366 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:41.366 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:41.366 23:12:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.604 23:12:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.604 23:12:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.604 23:12:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.604 23:12:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.604 23:12:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.604 23:12:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:53.604 23:12:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:53.604 [2024-11-25 23:12:25.636264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
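The "Still waiting for ... to be gone" lines come from the @50-@51 polling loop traced above: re-read `bdev_bdfs` every half second until the removed controllers stop appearing, then rescan and rebind. As a loop:

    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
    echo 1 > /sys/bus/pci/rescan    # @56 (redirection target inferred, as before)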
00:10:53.604 [2024-11-25 23:12:25.637516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.604 [2024-11-25 23:12:25.637553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.604 [2024-11-25 23:12:25.637565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.604 [2024-11-25 23:12:25.637583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.604 [2024-11-25 23:12:25.637591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.604 [2024-11-25 23:12:25.637599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.604 [2024-11-25 23:12:25.637606] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.605 [2024-11-25 23:12:25.637614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.605 [2024-11-25 23:12:25.637620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.605 [2024-11-25 23:12:25.637628] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.605 [2024-11-25 23:12:25.637635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.605 [2024-11-25 23:12:25.637643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.863 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:53.863 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.863 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.863 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.863 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.864 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.864 23:12:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.864 23:12:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.864 23:12:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.864 [2024-11-25 23:12:26.136257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
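The "(( 1 > 0 ))" / "sleep 0.5" / "Still waiting for ... to be gone" cycle repeating above is a simple poll loop: after the controllers are removed, the test re-reads the bdev list every half second until no BDF is reported any more. A plausible reconstruction of sw_hotplug.sh@50-51 as traced:

# Poll until the detached controllers stop showing up in bdev_get_bdevs.
# The (( N > 0 )) checks in the trace are this loop's guard.
bdfs=($(bdev_bdfs))
while ((${#bdfs[@]} > 0)) && sleep 0.5; do
  printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
  bdfs=($(bdev_bdfs))
done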
00:10:53.864 [2024-11-25 23:12:26.137471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.864 [2024-11-25 23:12:26.137503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.864 [2024-11-25 23:12:26.137516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.864 [2024-11-25 23:12:26.137532] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.864 [2024-11-25 23:12:26.137541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.864 [2024-11-25 23:12:26.137548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.864 [2024-11-25 23:12:26.137557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.864 [2024-11-25 23:12:26.137564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.864 [2024-11-25 23:12:26.137572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.864 [2024-11-25 23:12:26.137579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.864 [2024-11-25 23:12:26.137587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.864 [2024-11-25 23:12:26.137593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.864 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:53.864 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:54.430 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:54.430 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:54.430 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:54.430 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.430 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.430 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.430 23:12:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.430 23:12:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.430 23:12:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.430 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:54.430 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:54.430 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.430 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.430 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:54.689 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:54.689 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.689 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.689 23:12:26 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.689 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:54.689 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:54.689 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.689 23:12:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:06.895 23:12:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:06.895 23:12:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:06.895 23:12:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:06.895 23:12:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.895 23:12:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.895 23:12:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.895 23:12:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.895 23:12:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.895 23:12:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.895 23:12:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:06.895 23:12:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:06.895 23:12:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.895 23:12:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.895 23:12:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.895 23:12:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.895 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:06.895 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.895 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.895 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.895 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.895 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.895 23:12:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.895 23:12:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.895 23:12:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.895 [2024-11-25 23:12:39.036494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
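The echo sequence at sw_hotplug.sh@56-62 above re-attaches the controllers through sysfs, but xtrace hides redirection targets, so only the written values (1, uio_pci_generic, the BDF twice, an empty string) are visible in the log. The sketch below is one plausible mapping onto the standard driver_override rebind sequence; every path shown is an assumption, not read from the trace:

echo 1 > /sys/bus/pci/rescan                                          # @56 (assumed target)
for dev in "${nvmes[@]}"; do                                          # @58
  echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59 (assumed target)
  echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind             # @60 (assumed target)
  echo "$dev" > /sys/bus/pci/drivers_probe                            # @61 (assumed target)
  echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62 (assumed target)
done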
00:11:06.895 [2024-11-25 23:12:39.037942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.895 [2024-11-25 23:12:39.037982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.895 [2024-11-25 23:12:39.037995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.895 [2024-11-25 23:12:39.038018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.895 [2024-11-25 23:12:39.038026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.895 [2024-11-25 23:12:39.038037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.895 [2024-11-25 23:12:39.038045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.895 [2024-11-25 23:12:39.038054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.895 [2024-11-25 23:12:39.038071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.895 [2024-11-25 23:12:39.038080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.895 [2024-11-25 23:12:39.038087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.895 [2024-11-25 23:12:39.038095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.895 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:06.895 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:07.155 [2024-11-25 23:12:39.436488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
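After re-attach, the trace shows sleep 12 (twice the 6-second hotplug_wait), a fresh bdev_bdfs read, and the comparison at sw_hotplug.sh@71; the backslash-riddled right-hand side printed in the trace is just how xtrace renders a quoted [[ == ]] pattern, not corruption. In sketch form:

sleep $((2 * hotplug_wait))         # @66: give the driver time to re-enumerate
bdfs=($(bdev_bdfs))                 # @70
[[ ${bdfs[*]} == "${nvmes[*]}" ]]   # @71: every controller must be back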
00:11:07.155 [2024-11-25 23:12:39.437740] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.155 [2024-11-25 23:12:39.437773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.155 [2024-11-25 23:12:39.437786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.155 [2024-11-25 23:12:39.437799] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.155 [2024-11-25 23:12:39.437808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.155 [2024-11-25 23:12:39.437816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.155 [2024-11-25 23:12:39.437825] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.155 [2024-11-25 23:12:39.437832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.155 [2024-11-25 23:12:39.437843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.155 [2024-11-25 23:12:39.437850] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.155 [2024-11-25 23:12:39.437858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.155 [2024-11-25 23:12:39.437865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:07.416 23:12:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.416 23:12:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.416 23:12:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.416 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:07.675 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:07.675 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.675 23:12:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.85 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.85 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.85 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.85 2 00:11:19.909 remove_attach_helper took 45.85s to complete (handling 2 nvme drive(s)) 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:19.909 23:12:51 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:19.909 23:12:51 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:19.909 23:12:51 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:26.496 23:12:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:26.496 23:12:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.496 23:12:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.496 23:12:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.496 23:12:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.496 23:12:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:26.496 23:12:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:26.496 23:12:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:26.496 23:12:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.497 23:12:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.497 23:12:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.497 23:12:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.497 23:12:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.497 23:12:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.497 23:12:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:26.497 23:12:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:26.497 [2024-11-25 23:12:58.012754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:26.497 [2024-11-25 23:12:58.014408] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.497 [2024-11-25 23:12:58.014554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.497 [2024-11-25 23:12:58.014573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.497 [2024-11-25 23:12:58.014595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.497 [2024-11-25 23:12:58.014605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.497 [2024-11-25 23:12:58.014616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.497 [2024-11-25 23:12:58.014625] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.497 [2024-11-25 23:12:58.014635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.497 [2024-11-25 23:12:58.014644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.497 [2024-11-25 23:12:58.014654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.497 [2024-11-25 23:12:58.014663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.497 [2024-11-25 23:12:58.014675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.497 [2024-11-25 23:12:58.412762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
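The 45.85s figure reported above comes from a timing wrapper: the trace shows timing_cmd running remove_attach_helper 3 6 true with TIMEFORMAT=%2R, which makes bash's time builtin print only the elapsed real seconds. A simplified sketch of that mechanism; the real wrapper also juggles file descriptors with exec, elided here:

timing_cmd() {
  local time TIMEFORMAT=%2R
  # time reports on stderr; capture just that while discarding the wrapped
  # command's own output (a simplification of the traced wrapper).
  time=$({ time "$@" > /dev/null 2>&1; } 2>&1)
  echo "$time"
}
helper_time=$(timing_cmd remove_attach_helper 3 6 true)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
  "$helper_time" "${#nvmes[@]}"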
00:11:26.497 [2024-11-25 23:12:58.413963] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.497 [2024-11-25 23:12:58.414003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.497 [2024-11-25 23:12:58.414020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.497 [2024-11-25 23:12:58.414038] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.497 [2024-11-25 23:12:58.414050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.497 [2024-11-25 23:12:58.414075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.497 [2024-11-25 23:12:58.414087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.497 [2024-11-25 23:12:58.414096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.497 [2024-11-25 23:12:58.414106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.497 [2024-11-25 23:12:58.414115] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.497 [2024-11-25 23:12:58.414125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.497 [2024-11-25 23:12:58.414134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.497 23:12:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.497 23:12:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.497 23:12:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.497 23:12:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.704 23:13:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.704 23:13:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.704 23:13:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.704 23:13:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.704 23:13:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.704 [2024-11-25 23:13:10.912986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
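Putting the pieces together: the debug_remove_attach_helper 3 6 true call traced earlier sets hotplug_events=3, hotplug_wait=6 and use_bdev=true, and each of the three iterations above follows the same detach/poll/re-attach/verify cycle. A consolidated, hedged sketch of the helper's outer loop; the echo 1 at @40 is assumed to hit each device's sysfs remove node, since xtrace hides the target:

remove_attach_helper() {
  local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3 dev bdfs
  sleep "$hotplug_wait"                              # @36: initial settle time
  while ((hotplug_events--)); do                     # @38
    for dev in "${nvmes[@]}"; do                     # @39
      echo 1 > "/sys/bus/pci/devices/$dev/remove"    # @40 (assumed target)
    done
    # ... poll until gone, rescan and rebind, verify (see sketches above) ...
  done
}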
00:11:38.704 [2024-11-25 23:13:10.915593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.704 [2024-11-25 23:13:10.915627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.704 [2024-11-25 23:13:10.915639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.704 [2024-11-25 23:13:10.915656] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.704 [2024-11-25 23:13:10.915663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.704 [2024-11-25 23:13:10.915672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.704 [2024-11-25 23:13:10.915679] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.704 [2024-11-25 23:13:10.915687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.704 [2024-11-25 23:13:10.915693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.704 [2024-11-25 23:13:10.915701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.704 [2024-11-25 23:13:10.915708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.704 [2024-11-25 23:13:10.915716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.704 23:13:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:38.704 23:13:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:38.962 [2024-11-25 23:13:11.312982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
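Before this timed run started, the trace showed rpc_cmd bdev_nvme_set_hotplug -d followed by -e (sw_hotplug.sh@119-120): the test flips SPDK's own hotplug monitor off and back on, so that when enabled it periodically probes for newly attached NVMe devices and recreates their bdevs. Equivalent standalone calls, using the same rpc_cmd wrapper:

rpc_cmd bdev_nvme_set_hotplug -d   # disable SPDK's periodic hotplug probing
rpc_cmd bdev_nvme_set_hotplug -e   # re-enable it for the timed test run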
00:11:38.962 [2024-11-25 23:13:11.313962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.962 [2024-11-25 23:13:11.313991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.962 [2024-11-25 23:13:11.314003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.962 [2024-11-25 23:13:11.314016] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.962 [2024-11-25 23:13:11.314026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.962 [2024-11-25 23:13:11.314034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.962 [2024-11-25 23:13:11.314042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.962 [2024-11-25 23:13:11.314049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.962 [2024-11-25 23:13:11.314065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.962 [2024-11-25 23:13:11.314073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.962 [2024-11-25 23:13:11.314080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.962 [2024-11-25 23:13:11.314087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.220 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:39.220 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.221 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.221 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.221 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.221 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.221 23:13:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.221 23:13:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.221 23:13:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.221 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:39.221 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:39.221 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.221 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.221 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:39.478 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:39.478 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.478 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.478 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.478 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:39.478 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:39.478 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.478 23:13:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.711 23:13:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.711 23:13:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.711 23:13:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.711 23:13:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.711 23:13:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.711 23:13:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:51.711 23:13:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:51.711 [2024-11-25 23:13:23.813199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
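Every detach above produces the same four ABORTED - BY REQUEST (00/07) completions: the (00/07) pair is (status code type / status code), and generic status 07h is Command Abort Requested, which is exactly what aborting the four outstanding ASYNC EVENT REQUESTs should report. An illustrative decode, assuming the usual folding of the NVMe completion status word (phase in bit 0, SC in bits 1-8, SCT in bits 9-11):

decode_status() {
  local st=$1
  printf 'sct=%02x sc=%02x\n' $(((st >> 9) & 0x7)) $(((st >> 1) & 0xff))
}
decode_status 0x000e   # -> sct=00 sc=07, i.e. ABORTED - BY REQUEST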
00:11:51.711 [2024-11-25 23:13:23.814205] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.711 [2024-11-25 23:13:23.814302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.711 [2024-11-25 23:13:23.814371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.711 [2024-11-25 23:13:23.814435] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.711 [2024-11-25 23:13:23.814499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.711 [2024-11-25 23:13:23.814549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.711 [2024-11-25 23:13:23.814577] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.711 [2024-11-25 23:13:23.814629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.711 [2024-11-25 23:13:23.814674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.711 [2024-11-25 23:13:23.814699] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.711 [2024-11-25 23:13:23.814715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.711 [2024-11-25 23:13:23.814781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.019 [2024-11-25 23:13:24.213204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:52.020 [2024-11-25 23:13:24.214144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.020 [2024-11-25 23:13:24.214240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.020 [2024-11-25 23:13:24.214299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.020 [2024-11-25 23:13:24.214350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.020 [2024-11-25 23:13:24.214371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.020 [2024-11-25 23:13:24.214394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.020 [2024-11-25 23:13:24.214451] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.020 [2024-11-25 23:13:24.214470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.020 [2024-11-25 23:13:24.214496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.020 [2024-11-25 23:13:24.214520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.020 [2024-11-25 23:13:24.214561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.020 [2024-11-25 23:13:24.214588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.020 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:52.020 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.020 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.020 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.020 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.020 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.020 23:13:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.020 23:13:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.020 23:13:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.020 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:52.020 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:52.278 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.278 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.278 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:52.278 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:52.278 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.278 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.278 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.278 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:52.278 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:52.278 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.278 23:13:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:04.475 23:13:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:04.475 23:13:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:04.475 23:13:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:04.475 23:13:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.475 23:13:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.475 23:13:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.475 23:13:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:04.475 23:13:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.70 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.70 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:04.475 23:13:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.70 00:12:04.475 23:13:36 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.70 2 00:12:04.475 remove_attach_helper took 44.70s to complete (handling 2 nvme drive(s)) 23:13:36 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:04.475 23:13:36 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67267 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67267 ']' 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67267 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67267 00:12:04.475 killing process with pid 67267 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67267' 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67267 00:12:04.475 23:13:36 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67267 00:12:05.910 23:13:37 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:05.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:06.517 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:06.517 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:06.517 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:06.517 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:06.517 00:12:06.517 real 2m30.108s 00:12:06.517 user 1m52.372s 00:12:06.517 sys 0m16.427s 00:12:06.517 23:13:38 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.517 23:13:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.517 ************************************ 00:12:06.517 END TEST sw_hotplug 00:12:06.517 ************************************ 00:12:06.517 23:13:38 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:06.517 23:13:38 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:06.517 23:13:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:06.517 23:13:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.517 23:13:38 -- common/autotest_common.sh@10 -- # set +x 00:12:06.517 ************************************ 00:12:06.517 START TEST nvme_xnvme 00:12:06.517 ************************************ 00:12:06.517 23:13:38 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:06.779 * Looking for test storage... 00:12:06.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:06.779 23:13:38 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:06.779 23:13:38 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:06.779 23:13:38 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:06.779 23:13:38 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.779 23:13:38 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.780 23:13:38 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:06.780 23:13:38 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.780 23:13:38 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:06.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.780 --rc genhtml_branch_coverage=1 00:12:06.780 --rc genhtml_function_coverage=1 00:12:06.780 --rc genhtml_legend=1 00:12:06.780 --rc geninfo_all_blocks=1 00:12:06.780 --rc geninfo_unexecuted_blocks=1 00:12:06.780 00:12:06.780 ' 00:12:06.780 23:13:38 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:06.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.780 --rc genhtml_branch_coverage=1 00:12:06.780 --rc genhtml_function_coverage=1 00:12:06.780 --rc genhtml_legend=1 00:12:06.780 --rc geninfo_all_blocks=1 00:12:06.780 --rc geninfo_unexecuted_blocks=1 00:12:06.780 00:12:06.780 ' 00:12:06.780 23:13:38 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:06.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.780 --rc genhtml_branch_coverage=1 00:12:06.780 --rc genhtml_function_coverage=1 00:12:06.780 --rc genhtml_legend=1 00:12:06.780 --rc geninfo_all_blocks=1 00:12:06.780 --rc geninfo_unexecuted_blocks=1 00:12:06.780 00:12:06.780 ' 00:12:06.780 23:13:38 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:06.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.780 --rc genhtml_branch_coverage=1 00:12:06.780 --rc genhtml_function_coverage=1 00:12:06.780 --rc genhtml_legend=1 00:12:06.780 --rc geninfo_all_blocks=1 00:12:06.780 --rc geninfo_unexecuted_blocks=1 00:12:06.780 00:12:06.780 ' 00:12:06.780 23:13:38 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:06.780 23:13:38 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:06.780 23:13:38 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:06.780 23:13:38 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:06.780 23:13:38 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:06.780 23:13:38 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:06.780 23:13:38 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:06.780 23:13:38 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:06.780 23:13:38 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:06.780 23:13:38 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
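The cmp_versions walk traced at scripts/common.sh@333-368 above (invoked as lt 1.15 2 to choose lcov options) splits both version strings on ".", "-" and ":" and compares them numerically field by field, treating missing fields as 0. A hedged reconstruction of its shape:

lt() { cmp_versions "$1" "<" "$2"; }   # e.g. lt 1.15 2 -> exit 0 (true)

cmp_versions() {
  local -a ver1 ver2
  local op=$2 v n
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$3"
  n=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
  for ((v = 0; v < n; v++)); do
    if ((${ver1[v]:-0} != ${ver2[v]:-0})); then
      case $op in
        "<") ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0 ;;
        ">") ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 0 ;;
      esac
      return 1
    fi
  done
  # All fields equal: strict < and > are false.
  [[ $op != "<" && $op != ">" ]]
}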
00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:06.780 23:13:38 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:06.780 23:13:39 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:06.780 23:13:39 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:06.780 23:13:39 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:06.780 23:13:39 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:06.780 23:13:39 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:06.780 23:13:39 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:06.780 23:13:39 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:06.780 23:13:39 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:06.780 23:13:39 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:06.780 23:13:39 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:06.780 23:13:39 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:06.780 23:13:39 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:06.780 23:13:39 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:06.780 23:13:39 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:06.780 23:13:39 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:06.780 23:13:39 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:06.780 23:13:39 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:06.780 23:13:39 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:06.780 #define SPDK_CONFIG_H 00:12:06.780 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:06.780 #define SPDK_CONFIG_APPS 1 00:12:06.780 #define SPDK_CONFIG_ARCH native 00:12:06.781 #define SPDK_CONFIG_ASAN 1 00:12:06.781 #undef SPDK_CONFIG_AVAHI 00:12:06.781 #undef SPDK_CONFIG_CET 00:12:06.781 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:06.781 #define SPDK_CONFIG_COVERAGE 1 00:12:06.781 #define SPDK_CONFIG_CROSS_PREFIX 00:12:06.781 #undef SPDK_CONFIG_CRYPTO 00:12:06.781 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:06.781 #undef SPDK_CONFIG_CUSTOMOCF 00:12:06.781 #undef SPDK_CONFIG_DAOS 00:12:06.781 #define SPDK_CONFIG_DAOS_DIR 00:12:06.781 #define SPDK_CONFIG_DEBUG 1 00:12:06.781 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:06.781 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:06.781 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:06.781 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:06.781 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:06.781 #undef SPDK_CONFIG_DPDK_UADK 00:12:06.781 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:06.781 #define SPDK_CONFIG_EXAMPLES 1 00:12:06.781 #undef SPDK_CONFIG_FC 00:12:06.781 #define SPDK_CONFIG_FC_PATH 00:12:06.781 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:06.781 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:06.781 #define SPDK_CONFIG_FSDEV 1 00:12:06.781 #undef SPDK_CONFIG_FUSE 00:12:06.781 #undef SPDK_CONFIG_FUZZER 00:12:06.781 #define SPDK_CONFIG_FUZZER_LIB 00:12:06.781 #undef SPDK_CONFIG_GOLANG 00:12:06.781 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:06.781 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:06.781 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:06.781 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:06.781 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:06.781 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:06.781 #undef SPDK_CONFIG_HAVE_LZ4 00:12:06.781 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:06.781 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:06.781 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:06.781 #define SPDK_CONFIG_IDXD 1 00:12:06.781 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:06.781 #undef SPDK_CONFIG_IPSEC_MB 00:12:06.781 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:06.781 #define SPDK_CONFIG_ISAL 1 00:12:06.781 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:06.781 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:06.781 #define SPDK_CONFIG_LIBDIR 00:12:06.781 #undef SPDK_CONFIG_LTO 00:12:06.781 #define SPDK_CONFIG_MAX_LCORES 128 00:12:06.781 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:06.781 #define SPDK_CONFIG_NVME_CUSE 1 00:12:06.781 #undef SPDK_CONFIG_OCF 00:12:06.781 #define SPDK_CONFIG_OCF_PATH 00:12:06.781 #define SPDK_CONFIG_OPENSSL_PATH 00:12:06.781 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:06.781 #define SPDK_CONFIG_PGO_DIR 00:12:06.781 #undef SPDK_CONFIG_PGO_USE 00:12:06.781 #define SPDK_CONFIG_PREFIX /usr/local 00:12:06.781 #undef SPDK_CONFIG_RAID5F 00:12:06.781 #undef SPDK_CONFIG_RBD 00:12:06.781 #define SPDK_CONFIG_RDMA 1 00:12:06.781 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:06.781 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:06.781 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:06.781 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:06.781 #define SPDK_CONFIG_SHARED 1 00:12:06.781 #undef SPDK_CONFIG_SMA 00:12:06.781 #define SPDK_CONFIG_TESTS 1 00:12:06.781 #undef SPDK_CONFIG_TSAN 00:12:06.781 #define SPDK_CONFIG_UBLK 1 00:12:06.781 #define SPDK_CONFIG_UBSAN 1 00:12:06.781 #undef SPDK_CONFIG_UNIT_TESTS 00:12:06.781 #undef SPDK_CONFIG_URING 00:12:06.781 #define SPDK_CONFIG_URING_PATH 00:12:06.781 #undef SPDK_CONFIG_URING_ZNS 00:12:06.781 #undef SPDK_CONFIG_USDT 00:12:06.781 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:06.781 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:06.781 #undef SPDK_CONFIG_VFIO_USER 00:12:06.781 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:06.781 #define SPDK_CONFIG_VHOST 1 00:12:06.781 #define SPDK_CONFIG_VIRTIO 1 00:12:06.781 #undef SPDK_CONFIG_VTUNE 00:12:06.781 #define SPDK_CONFIG_VTUNE_DIR 00:12:06.781 #define SPDK_CONFIG_WERROR 1 00:12:06.781 #define SPDK_CONFIG_WPDK_DIR 00:12:06.781 #define SPDK_CONFIG_XNVME 1 00:12:06.781 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:06.781 23:13:39 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:06.781 23:13:39 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:06.781 23:13:39 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:06.781 23:13:39 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.781 23:13:39 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.781 23:13:39 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.781 23:13:39 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.781 23:13:39 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.781 23:13:39 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.781 23:13:39 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:06.781 23:13:39 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:06.781 
23:13:39 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:06.781 23:13:39 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:06.781 23:13:39 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:06.781 23:13:39 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:06.782 23:13:39 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:06.782 23:13:39 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:06.782 23:13:39 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
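The autotest_common.sh trace above rebuilds a leak-suppression file and exports the sanitizer runtime options before any instrumented binary runs. A minimal standalone sketch of that pattern, assuming stock bash; the values are copied from the trace itself:

# Recreate the LSAN suppression file on every run, then point all three
# sanitizers at the hardened defaults used by this test suite. The
# libfuse3 entry silences a known leak in that library, exactly as the
# trace above does (autotest_common.sh@204-@244).
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" >> "$asan_suppression_file"
export LSAN_OPTIONS="suppressions=$asan_suppression_file"
export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"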
00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68635 ]] 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68635 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.mOaMUc 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.mOaMUc/tests/xnvme /tmp/spdk.mOaMUc 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:06.783 23:13:39 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13980864512 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5587054592 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13980864512 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5587054592 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.783 23:13:39 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98086719488 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1616060416 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:06.783 * Looking for test storage... 
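The set_test_storage run traced above enumerates every mount with df -T into associative arrays, then checks whether the candidate test directory's filesystem has enough free space. A condensed sketch of that logic, assuming GNU df; the -B1 flag and the helper name has_test_space are illustrative (the trace stores byte counts, and the awk filter for resolving the backing mount is copied from autotest_common.sh@385):

# has_test_space DIR BYTES -- succeed if DIR's filesystem has BYTES free.
has_test_space() {
    local target_dir=$1 requested_size=$2
    local source fs size use avail mount target_space
    declare -A avails
    # df -T columns: source, fstype, size, used, avail, use%, mountpoint
    while read -r source fs size use avail _ mount; do
        avails["$mount"]=$avail
    done < <(df -T -B1 | grep -v Filesystem)
    # Resolve which mount backs the target directory.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]:-0}
    (( target_space >= requested_size ))
}
# e.g. the ~2 GiB plus slack requested in the trace above:
has_test_space /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 2214592512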
00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13980864512 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:06.783 23:13:39 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:06.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:06.784 23:13:39 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:07.043 23:13:39 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.043 23:13:39 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:07.043 23:13:39 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.043 23:13:39 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:07.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.044 --rc genhtml_branch_coverage=1 00:12:07.044 --rc genhtml_function_coverage=1 00:12:07.044 --rc genhtml_legend=1 00:12:07.044 --rc geninfo_all_blocks=1 00:12:07.044 --rc geninfo_unexecuted_blocks=1 00:12:07.044 00:12:07.044 ' 00:12:07.044 23:13:39 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:07.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.044 --rc genhtml_branch_coverage=1 00:12:07.044 --rc genhtml_function_coverage=1 00:12:07.044 --rc genhtml_legend=1 00:12:07.044 --rc geninfo_all_blocks=1 
00:12:07.044 --rc geninfo_unexecuted_blocks=1 00:12:07.044 00:12:07.044 ' 00:12:07.044 23:13:39 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:07.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.044 --rc genhtml_branch_coverage=1 00:12:07.044 --rc genhtml_function_coverage=1 00:12:07.044 --rc genhtml_legend=1 00:12:07.044 --rc geninfo_all_blocks=1 00:12:07.044 --rc geninfo_unexecuted_blocks=1 00:12:07.044 00:12:07.044 ' 00:12:07.044 23:13:39 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:07.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.044 --rc genhtml_branch_coverage=1 00:12:07.044 --rc genhtml_function_coverage=1 00:12:07.044 --rc genhtml_legend=1 00:12:07.044 --rc geninfo_all_blocks=1 00:12:07.044 --rc geninfo_unexecuted_blocks=1 00:12:07.044 00:12:07.044 ' 00:12:07.044 23:13:39 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:07.044 23:13:39 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.044 23:13:39 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.044 23:13:39 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.044 23:13:39 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.044 23:13:39 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.044 23:13:39 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.044 23:13:39 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.044 23:13:39 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:07.044 23:13:39 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.044 23:13:39 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:07.044 23:13:39 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:07.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:07.305 Waiting for block devices as requested 00:12:07.305 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:07.566 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:07.566 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:07.566 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:12.863 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:12.863 23:13:44 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:13.123 23:13:45 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:13.124 23:13:45 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:13.124 23:13:45 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:13.124 23:13:45 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:13.124 23:13:45 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:13.124 23:13:45 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:13.124 23:13:45 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:13.385 No valid GPT data, bailing 00:12:13.385 23:13:45 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:13.385 23:13:45 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:13.385 23:13:45 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:13.385 23:13:45 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:13.385 23:13:45 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:13.385 23:13:45 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:13.385 23:13:45 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:13.385 23:13:45 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:13.385 23:13:45 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:13.385 23:13:45 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:13.385 23:13:45 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:13.385 23:13:45 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:13.385 23:13:45 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:13.385 23:13:45 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:13.385 23:13:45 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:13.385 23:13:45 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:13.385 23:13:45 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:13.385 23:13:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:13.385 23:13:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.385 23:13:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:13.385 ************************************ 00:12:13.385 START TEST xnvme_rpc 00:12:13.385 ************************************ 00:12:13.385 23:13:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:13.385 23:13:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:13.385 23:13:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:13.385 23:13:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:13.385 23:13:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:13.385 23:13:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69026 00:12:13.385 23:13:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69026 00:12:13.385 23:13:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69026 ']' 00:12:13.385 23:13:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.385 23:13:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:13.385 23:13:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.385 23:13:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.385 23:13:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.385 23:13:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.385 [2024-11-25 23:13:45.628542] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
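Device selection above hinges on block_in_use from scripts/common.sh: a disk with no recognizable partition-table signature is treated as free for testing. A reduced sketch of that check; the real helper also consults spdk-gpt.py first, which is where the "No valid GPT data, bailing" line in the log comes from:

# Return 0 (in use) when the device carries a partition-table signature,
# 1 (free) when blkid reports no PTTYPE -- matching the trace above,
# where an empty pt and 'return 1' let /dev/nvme0n1 be claimed.
block_in_use() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "$block" 2>/dev/null)
    [[ -n $pt ]]
}
block_in_use /dev/nvme0n1 || echo "/dev/nvme0n1 is free for xnvme tests"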
00:12:13.385 [2024-11-25 23:13:45.628810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69026 ] 00:12:13.646 [2024-11-25 23:13:45.789597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.646 [2024-11-25 23:13:45.887439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.219 xnvme_bdev 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:14.219 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69026 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69026 ']' 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69026 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69026 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.479 killing process with pid 69026 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69026' 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69026 00:12:14.479 23:13:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69026 00:12:15.861 00:12:15.861 real 0m2.598s 00:12:15.861 user 0m2.694s 00:12:15.861 sys 0m0.342s 00:12:15.861 23:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.861 23:13:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.861 ************************************ 00:12:15.861 END TEST xnvme_rpc 00:12:15.861 ************************************ 00:12:15.861 23:13:48 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:15.861 23:13:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:15.861 23:13:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.861 23:13:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:15.861 ************************************ 00:12:15.861 START TEST xnvme_bdevperf 00:12:15.861 ************************************ 00:12:15.861 23:13:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:15.861 23:13:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:15.861 23:13:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:15.861 23:13:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:15.861 23:13:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:15.861 23:13:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
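The xnvme_rpc test that just completed validates each bdev attribute by dumping the live bdev subsystem config over RPC and filtering it with jq, exactly as the rpc_xnvme calls above show. A condensed sketch of that round-trip (the jq filter is copied from the trace; the rpc.py path is an assumption, since the harness goes through its own rpc_cmd wrapper):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed CLI entry point

# rpc_xnvme <attr>: read one parameter of the xnvme bdev back from the target.
rpc_xnvme() {
  "$rpc" framework_get_config bdev \
    | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
}

"$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
[[ $(rpc_xnvme name) == xnvme_bdev ]]          # assertions as in the trace
[[ $(rpc_xnvme io_mechanism) == libaio ]]
"$rpc" bdev_xnvme_delete xnvme_bdev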
00:12:15.862 23:13:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:15.862 23:13:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:16.122 { 00:12:16.122 "subsystems": [ 00:12:16.122 { 00:12:16.122 "subsystem": "bdev", 00:12:16.122 "config": [ 00:12:16.122 { 00:12:16.122 "params": { 00:12:16.122 "io_mechanism": "libaio", 00:12:16.122 "conserve_cpu": false, 00:12:16.122 "filename": "/dev/nvme0n1", 00:12:16.122 "name": "xnvme_bdev" 00:12:16.122 }, 00:12:16.122 "method": "bdev_xnvme_create" 00:12:16.122 }, 00:12:16.122 { 00:12:16.122 "method": "bdev_wait_for_examine" 00:12:16.122 } 00:12:16.122 ] 00:12:16.122 } 00:12:16.122 ] 00:12:16.122 } 00:12:16.122 [2024-11-25 23:13:48.275678] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:12:16.122 [2024-11-25 23:13:48.275793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69093 ] 00:12:16.122 [2024-11-25 23:13:48.436230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.383 [2024-11-25 23:13:48.531129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.644 Running I/O for 5 seconds... 00:12:18.521 34309.00 IOPS, 134.02 MiB/s [2024-11-25T23:13:51.850Z] 35559.50 IOPS, 138.90 MiB/s [2024-11-25T23:13:53.237Z] 34737.33 IOPS, 135.69 MiB/s [2024-11-25T23:13:53.808Z] 34682.50 IOPS, 135.48 MiB/s [2024-11-25T23:13:53.809Z] 35129.80 IOPS, 137.23 MiB/s 00:12:21.440 Latency(us) 00:12:21.440 [2024-11-25T23:13:53.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.440 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:21.440 xnvme_bdev : 5.00 35101.16 137.11 0.00 0.00 1818.15 356.04 5545.35 00:12:21.440 [2024-11-25T23:13:53.809Z] =================================================================================================================== 00:12:21.440 [2024-11-25T23:13:53.809Z] Total : 35101.16 137.11 0.00 0.00 1818.15 356.04 5545.35 00:12:22.383 23:13:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:22.383 23:13:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:22.383 23:13:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:22.383 23:13:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:22.383 23:13:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:22.383 { 00:12:22.383 "subsystems": [ 00:12:22.383 { 00:12:22.383 "subsystem": "bdev", 00:12:22.383 "config": [ 00:12:22.383 { 00:12:22.383 "params": { 00:12:22.383 "io_mechanism": "libaio", 00:12:22.383 "conserve_cpu": false, 00:12:22.383 "filename": "/dev/nvme0n1", 00:12:22.383 "name": "xnvme_bdev" 00:12:22.383 }, 00:12:22.383 "method": "bdev_xnvme_create" 00:12:22.383 }, 00:12:22.384 { 00:12:22.384 "method": "bdev_wait_for_examine" 00:12:22.384 } 00:12:22.384 ] 00:12:22.384 } 00:12:22.384 ] 00:12:22.384 } 00:12:22.384 [2024-11-25 23:13:54.585156] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
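bdevperf never touches a config file in these runs: gen_conf prints the JSON block shown above and the harness hands it to the process as file descriptor 62. An equivalent standalone invocation for the first (randread) run, with the same JSON fed on stdin instead of fd 62, would be:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /dev/stdin -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON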
00:12:22.384 [2024-11-25 23:13:54.585395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69165 ] 00:12:22.384 [2024-11-25 23:13:54.739226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.643 [2024-11-25 23:13:54.841771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.903 Running I/O for 5 seconds... 00:12:24.853 17974.00 IOPS, 70.21 MiB/s [2024-11-25T23:13:58.167Z] 10410.00 IOPS, 40.66 MiB/s [2024-11-25T23:13:59.550Z] 7882.00 IOPS, 30.79 MiB/s [2024-11-25T23:14:00.121Z] 13192.50 IOPS, 51.53 MiB/s [2024-11-25T23:14:00.121Z] 17670.80 IOPS, 69.03 MiB/s 00:12:27.752 Latency(us) 00:12:27.752 [2024-11-25T23:14:00.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.752 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:27.752 xnvme_bdev : 5.00 17668.06 69.02 0.00 0.00 3617.43 72.86 36498.51 00:12:27.752 [2024-11-25T23:14:00.121Z] =================================================================================================================== 00:12:27.752 [2024-11-25T23:14:00.121Z] Total : 17668.06 69.02 0.00 0.00 3617.43 72.86 36498.51 00:12:28.696 ************************************ 00:12:28.696 END TEST xnvme_bdevperf 00:12:28.696 ************************************ 00:12:28.696 00:12:28.696 real 0m12.681s 00:12:28.696 user 0m6.692s 00:12:28.696 sys 0m4.533s 00:12:28.696 23:14:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.696 23:14:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:28.696 23:14:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:28.696 23:14:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:28.696 23:14:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.696 23:14:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.696 ************************************ 00:12:28.696 START TEST xnvme_fio_plugin 00:12:28.696 ************************************ 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:28.696 23:14:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:28.696 { 00:12:28.696 "subsystems": [ 00:12:28.696 { 00:12:28.696 "subsystem": "bdev", 00:12:28.696 "config": [ 00:12:28.696 { 00:12:28.696 "params": { 00:12:28.696 "io_mechanism": "libaio", 00:12:28.696 "conserve_cpu": false, 00:12:28.696 "filename": "/dev/nvme0n1", 00:12:28.696 "name": "xnvme_bdev" 00:12:28.696 }, 00:12:28.696 "method": "bdev_xnvme_create" 00:12:28.696 }, 00:12:28.696 { 00:12:28.696 "method": "bdev_wait_for_examine" 00:12:28.696 } 00:12:28.696 ] 00:12:28.696 } 00:12:28.696 ] 00:12:28.696 } 00:12:28.955 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:28.955 fio-3.35 00:12:28.955 Starting 1 thread 00:12:35.541 00:12:35.541 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69285: Mon Nov 25 23:14:06 2024 00:12:35.541 read: IOPS=36.8k, BW=144MiB/s (151MB/s)(718MiB/5001msec) 00:12:35.541 slat (usec): min=4, max=1594, avg=19.35, stdev=76.77 00:12:35.541 clat (usec): min=106, max=4725, avg=1206.72, stdev=523.34 00:12:35.541 lat (usec): min=179, max=4764, avg=1226.07, stdev=518.08 00:12:35.541 clat percentiles (usec): 00:12:35.541 | 1.00th=[ 262], 5.00th=[ 429], 10.00th=[ 562], 20.00th=[ 758], 00:12:35.541 | 30.00th=[ 914], 40.00th=[ 1045], 50.00th=[ 1172], 60.00th=[ 1303], 00:12:35.541 | 70.00th=[ 1434], 80.00th=[ 1598], 90.00th=[ 1844], 95.00th=[ 2114], 00:12:35.541 | 99.00th=[ 2802], 99.50th=[ 3064], 99.90th=[ 3589], 99.95th=[ 3720], 00:12:35.541 | 99.99th=[ 4228] 00:12:35.541 bw ( KiB/s): min=134160, max=154216, 
per=99.60%, avg=146494.22, stdev=7799.38, samples=9 00:12:35.541 iops : min=33540, max=38554, avg=36623.56, stdev=1949.85, samples=9 00:12:35.541 lat (usec) : 250=0.84%, 500=6.68%, 750=11.88%, 1000=17.04% 00:12:35.541 lat (msec) : 2=56.98%, 4=6.56%, 10=0.02% 00:12:35.541 cpu : usr=41.06%, sys=48.60%, ctx=12, majf=0, minf=764 00:12:35.541 IO depths : 1=0.5%, 2=1.2%, 4=3.3%, 8=9.0%, 16=23.8%, 32=60.2%, >=64=2.0% 00:12:35.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.541 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:12:35.541 issued rwts: total=183890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:35.541 00:12:35.541 Run status group 0 (all jobs): 00:12:35.541 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=718MiB (753MB), run=5001-5001msec 00:12:35.541 ----------------------------------------------------- 00:12:35.541 Suppressions used: 00:12:35.541 count bytes template 00:12:35.541 1 11 /usr/src/fio/parse.c 00:12:35.541 1 8 libtcmalloc_minimal.so 00:12:35.541 1 904 libcrypto.so 00:12:35.541 ----------------------------------------------------- 00:12:35.541 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:35.541 23:14:07 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:35.541 23:14:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:35.541 { 00:12:35.541 "subsystems": [ 00:12:35.541 { 00:12:35.541 "subsystem": "bdev", 00:12:35.541 "config": [ 00:12:35.541 { 00:12:35.541 "params": { 00:12:35.541 "io_mechanism": "libaio", 00:12:35.541 "conserve_cpu": false, 00:12:35.541 "filename": "/dev/nvme0n1", 00:12:35.541 "name": "xnvme_bdev" 00:12:35.541 }, 00:12:35.541 "method": "bdev_xnvme_create" 00:12:35.541 }, 00:12:35.541 { 00:12:35.541 "method": "bdev_wait_for_examine" 00:12:35.541 } 00:12:35.541 ] 00:12:35.541 } 00:12:35.541 ] 00:12:35.541 } 00:12:35.803 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:35.803 fio-3.35 00:12:35.803 Starting 1 thread 00:12:42.394 00:12:42.394 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69377: Mon Nov 25 23:14:13 2024 00:12:42.394 write: IOPS=30.3k, BW=118MiB/s (124MB/s)(592MiB/5008msec); 0 zone resets 00:12:42.394 slat (usec): min=4, max=1621, avg=17.31, stdev=63.97 00:12:42.394 clat (usec): min=8, max=343267, avg=1734.70, stdev=7470.20 00:12:42.394 lat (usec): min=65, max=343280, avg=1752.01, stdev=7470.05 00:12:42.394 clat percentiles (usec): 00:12:42.394 | 1.00th=[ 188], 5.00th=[ 338], 10.00th=[ 445], 20.00th=[ 619], 00:12:42.394 | 30.00th=[ 750], 40.00th=[ 865], 50.00th=[ 979], 60.00th=[ 1123], 00:12:42.394 | 70.00th=[ 1287], 80.00th=[ 1565], 90.00th=[ 2802], 95.00th=[ 6521], 00:12:42.394 | 99.00th=[ 9634], 99.50th=[ 10421], 99.90th=[ 55837], 99.95th=[143655], 00:12:42.394 | 99.99th=[341836] 00:12:42.394 bw ( KiB/s): min=32584, max=155536, per=100.00%, avg=121156.40, stdev=35747.51, samples=10 00:12:42.394 iops : min= 8146, max=38884, avg=30289.10, stdev=8936.88, samples=10 00:12:42.394 lat (usec) : 10=0.01%, 20=0.01%, 50=0.05%, 100=0.16%, 250=1.93% 00:12:42.394 lat (usec) : 500=10.91%, 750=16.74%, 1000=21.67% 00:12:42.394 lat (msec) : 2=35.08%, 4=5.32%, 10=7.41%, 20=0.61%, 100=0.05% 00:12:42.394 lat (msec) : 250=0.03%, 500=0.04% 00:12:42.394 cpu : usr=55.10%, sys=32.02%, ctx=13, majf=0, minf=764 00:12:42.394 IO depths : 1=0.1%, 2=0.5%, 4=1.5%, 8=4.7%, 16=16.1%, 32=73.1%, >=64=4.0% 00:12:42.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.394 complete : 0=0.0%, 4=96.9%, 8=0.4%, 16=0.5%, 32=0.9%, 64=1.3%, >=64=0.0% 00:12:42.394 issued rwts: total=0,151537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:42.394 00:12:42.394 Run status group 0 (all jobs): 00:12:42.394 WRITE: bw=118MiB/s (124MB/s), 118MiB/s-118MiB/s (124MB/s-124MB/s), io=592MiB (621MB), run=5008-5008msec 00:12:42.394 ----------------------------------------------------- 00:12:42.394 Suppressions used: 00:12:42.394 count bytes template 00:12:42.394 1 11 /usr/src/fio/parse.c 00:12:42.394 1 8 libtcmalloc_minimal.so 00:12:42.394 1 904 libcrypto.so 00:12:42.394 
----------------------------------------------------- 00:12:42.394 00:12:42.654 00:12:42.654 real 0m13.813s 00:12:42.654 user 0m7.622s 00:12:42.654 sys 0m4.638s 00:12:42.654 ************************************ 00:12:42.654 END TEST xnvme_fio_plugin 00:12:42.654 ************************************ 00:12:42.654 23:14:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.654 23:14:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:42.654 23:14:14 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:42.654 23:14:14 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:42.654 23:14:14 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:42.654 23:14:14 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:42.654 23:14:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:42.654 23:14:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.654 23:14:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:42.654 ************************************ 00:12:42.654 START TEST xnvme_rpc 00:12:42.654 ************************************ 00:12:42.654 23:14:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:42.654 23:14:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:42.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.654 23:14:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:42.654 23:14:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:42.654 23:14:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:42.654 23:14:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69466 00:12:42.654 23:14:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69466 00:12:42.654 23:14:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69466 ']' 00:12:42.654 23:14:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.654 23:14:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.654 23:14:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:42.654 23:14:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.654 23:14:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.654 23:14:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.654 [2024-11-25 23:14:14.935208] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
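The pass starting here is the conserve_cpu=true leg of the loop: cc[true] expands to the -c flag, so the create call at xnvme/xnvme.sh@56 becomes bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c, and the later rpc_xnvme conserve_cpu assertion expects true. A sketch of that parameterization, reusing the rpc helpers sketched earlier (hedged reconstruction of the cc mapping seen in the trace):

declare -A cc=( [false]="" [true]="-c" )

for conserve in false true; do
  # ${cc[$conserve]} is left unquoted on purpose: it expands to nothing
  # on the first pass and to "-c" on the second.
  "$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ${cc[$conserve]}
  [[ $(rpc_xnvme conserve_cpu) == "$conserve" ]]
  "$rpc" bdev_xnvme_delete xnvme_bdev
done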
00:12:42.654 [2024-11-25 23:14:14.935612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69466 ] 00:12:42.916 [2024-11-25 23:14:15.100249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.916 [2024-11-25 23:14:15.221807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.876 xnvme_bdev 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.876 23:14:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69466 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69466 ']' 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69466 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69466 00:12:43.876 killing process with pid 69466 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69466' 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69466 00:12:43.876 23:14:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69466 00:12:45.791 00:12:45.791 real 0m2.932s 00:12:45.791 user 0m2.903s 00:12:45.791 sys 0m0.486s 00:12:45.791 ************************************ 00:12:45.791 END TEST xnvme_rpc 00:12:45.791 ************************************ 00:12:45.791 23:14:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.791 23:14:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.791 23:14:17 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:45.791 23:14:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:45.791 23:14:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.791 23:14:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.791 ************************************ 00:12:45.791 START TEST xnvme_bdevperf 00:12:45.791 ************************************ 00:12:45.791 23:14:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:45.791 23:14:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:45.791 23:14:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:45.791 23:14:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:45.791 23:14:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:45.791 23:14:17 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:45.791 23:14:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:45.791 23:14:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:45.791 { 00:12:45.791 "subsystems": [ 00:12:45.791 { 00:12:45.791 "subsystem": "bdev", 00:12:45.791 "config": [ 00:12:45.791 { 00:12:45.791 "params": { 00:12:45.791 "io_mechanism": "libaio", 00:12:45.791 "conserve_cpu": true, 00:12:45.791 "filename": "/dev/nvme0n1", 00:12:45.791 "name": "xnvme_bdev" 00:12:45.791 }, 00:12:45.791 "method": "bdev_xnvme_create" 00:12:45.791 }, 00:12:45.791 { 00:12:45.791 "method": "bdev_wait_for_examine" 00:12:45.791 } 00:12:45.791 ] 00:12:45.791 } 00:12:45.791 ] 00:12:45.791 } 00:12:45.791 [2024-11-25 23:14:17.921144] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:12:45.791 [2024-11-25 23:14:17.921313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69540 ] 00:12:45.791 [2024-11-25 23:14:18.084575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.052 [2024-11-25 23:14:18.206463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.315 Running I/O for 5 seconds... 00:12:48.202 35276.00 IOPS, 137.80 MiB/s [2024-11-25T23:14:21.955Z] 35702.50 IOPS, 139.46 MiB/s [2024-11-25T23:14:22.528Z] 34291.33 IOPS, 133.95 MiB/s [2024-11-25T23:14:23.914Z] 34943.25 IOPS, 136.50 MiB/s [2024-11-25T23:14:23.914Z] 35193.00 IOPS, 137.47 MiB/s 00:12:51.545 Latency(us) 00:12:51.545 [2024-11-25T23:14:23.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.545 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:51.545 xnvme_bdev : 5.01 35150.27 137.31 0.00 0.00 1814.88 83.89 14115.45 00:12:51.545 [2024-11-25T23:14:23.914Z] =================================================================================================================== 00:12:51.545 [2024-11-25T23:14:23.914Z] Total : 35150.27 137.31 0.00 0.00 1814.88 83.89 14115.45 00:12:52.119 23:14:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:52.119 23:14:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:52.119 23:14:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:52.119 23:14:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:52.119 23:14:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:52.119 { 00:12:52.119 "subsystems": [ 00:12:52.119 { 00:12:52.119 "subsystem": "bdev", 00:12:52.119 "config": [ 00:12:52.119 { 00:12:52.119 "params": { 00:12:52.119 "io_mechanism": "libaio", 00:12:52.119 "conserve_cpu": true, 00:12:52.119 "filename": "/dev/nvme0n1", 00:12:52.119 "name": "xnvme_bdev" 00:12:52.119 }, 00:12:52.119 "method": "bdev_xnvme_create" 00:12:52.119 }, 00:12:52.119 { 00:12:52.119 "method": "bdev_wait_for_examine" 00:12:52.119 } 00:12:52.119 ] 00:12:52.119 } 00:12:52.119 ] 00:12:52.119 } 00:12:52.119 [2024-11-25 23:14:24.384972] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
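The MiB/s column in these bdevperf summaries is just IOPS times the 4096-byte IO size: for the randread run above, 35150.27 IOPS × 4096 B ÷ 2^20 ≈ 137.31 MiB/s, matching the reported figure. A one-line check:

printf '%.2f MiB/s\n' "$(echo '35150.27 * 4096 / 1048576' | bc -l)"
# prints 137.31 MiB/s, agreeing with the summary row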
00:12:52.119 [2024-11-25 23:14:24.385165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69615 ] 00:12:52.380 [2024-11-25 23:14:24.545691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.380 [2024-11-25 23:14:24.677548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.642 Running I/O for 5 seconds... 00:12:54.977 3973.00 IOPS, 15.52 MiB/s [2024-11-25T23:14:28.290Z] 3858.00 IOPS, 15.07 MiB/s [2024-11-25T23:14:29.232Z] 3811.33 IOPS, 14.89 MiB/s [2024-11-25T23:14:30.175Z] 3853.00 IOPS, 15.05 MiB/s [2024-11-25T23:14:30.175Z] 3880.60 IOPS, 15.16 MiB/s 00:12:57.806 Latency(us) 00:12:57.806 [2024-11-25T23:14:30.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.806 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:57.806 xnvme_bdev : 5.02 3879.92 15.16 0.00 0.00 16463.54 63.41 58478.28 00:12:57.806 [2024-11-25T23:14:30.175Z] =================================================================================================================== 00:12:57.806 [2024-11-25T23:14:30.175Z] Total : 3879.92 15.16 0.00 0.00 16463.54 63.41 58478.28 00:12:58.750 00:12:58.750 real 0m12.957s 00:12:58.750 user 0m8.327s 00:12:58.750 sys 0m3.430s 00:12:58.750 23:14:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.750 23:14:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:58.750 ************************************ 00:12:58.750 END TEST xnvme_bdevperf 00:12:58.750 ************************************ 00:12:58.750 23:14:30 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:58.750 23:14:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:58.750 23:14:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.750 23:14:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.750 ************************************ 00:12:58.750 START TEST xnvme_fio_plugin 00:12:58.750 ************************************ 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:58.750 23:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:58.750 { 00:12:58.750 "subsystems": [ 00:12:58.750 { 00:12:58.750 "subsystem": "bdev", 00:12:58.750 "config": [ 00:12:58.750 { 00:12:58.750 "params": { 00:12:58.750 "io_mechanism": "libaio", 00:12:58.750 "conserve_cpu": true, 00:12:58.750 "filename": "/dev/nvme0n1", 00:12:58.750 "name": "xnvme_bdev" 00:12:58.750 }, 00:12:58.750 "method": "bdev_xnvme_create" 00:12:58.750 }, 00:12:58.750 { 00:12:58.750 "method": "bdev_wait_for_examine" 00:12:58.750 } 00:12:58.750 ] 00:12:58.750 } 00:12:58.750 ] 00:12:58.750 } 00:12:58.750 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:58.750 fio-3.35 00:12:58.750 Starting 1 thread 00:13:05.407 00:13:05.407 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69730: Mon Nov 25 23:14:36 2024 00:13:05.407 read: IOPS=35.6k, BW=139MiB/s (146MB/s)(696MiB/5001msec) 00:13:05.407 slat (usec): min=4, max=2929, avg=19.12, stdev=84.43 00:13:05.407 clat (usec): min=106, max=4565, avg=1273.32, stdev=525.42 00:13:05.407 lat (usec): min=181, max=4658, avg=1292.44, stdev=519.40 00:13:05.407 clat percentiles (usec): 00:13:05.407 | 1.00th=[ 273], 5.00th=[ 482], 10.00th=[ 644], 20.00th=[ 840], 00:13:05.407 | 30.00th=[ 988], 40.00th=[ 1123], 50.00th=[ 1237], 60.00th=[ 1369], 00:13:05.407 | 70.00th=[ 1483], 80.00th=[ 1647], 90.00th=[ 1926], 95.00th=[ 2180], 00:13:05.407 | 99.00th=[ 2900], 99.50th=[ 3195], 99.90th=[ 3687], 99.95th=[ 3818], 00:13:05.407 | 99.99th=[ 4080] 00:13:05.407 bw ( KiB/s): min=133720, max=149584, 
per=99.85%, avg=142365.33, stdev=4966.25, samples=9 00:13:05.407 iops : min=33430, max=37396, avg=35591.33, stdev=1241.56, samples=9 00:13:05.407 lat (usec) : 250=0.71%, 500=4.71%, 750=9.55%, 1000=15.82% 00:13:05.407 lat (msec) : 2=61.16%, 4=8.04%, 10=0.02% 00:13:05.407 cpu : usr=45.08%, sys=46.40%, ctx=45, majf=0, minf=764 00:13:05.407 IO depths : 1=0.6%, 2=1.3%, 4=3.4%, 8=8.9%, 16=23.3%, 32=60.4%, >=64=2.1% 00:13:05.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.407 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:05.407 issued rwts: total=178265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:05.407 00:13:05.407 Run status group 0 (all jobs): 00:13:05.408 READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=696MiB (730MB), run=5001-5001msec 00:13:05.408 ----------------------------------------------------- 00:13:05.408 Suppressions used: 00:13:05.408 count bytes template 00:13:05.408 1 11 /usr/src/fio/parse.c 00:13:05.408 1 8 libtcmalloc_minimal.so 00:13:05.408 1 904 libcrypto.so 00:13:05.408 ----------------------------------------------------- 00:13:05.408 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:05.408 23:14:37 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:05.408 23:14:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:05.408 { 00:13:05.408 "subsystems": [ 00:13:05.408 { 00:13:05.408 "subsystem": "bdev", 00:13:05.408 "config": [ 00:13:05.408 { 00:13:05.408 "params": { 00:13:05.408 "io_mechanism": "libaio", 00:13:05.408 "conserve_cpu": true, 00:13:05.408 "filename": "/dev/nvme0n1", 00:13:05.408 "name": "xnvme_bdev" 00:13:05.408 }, 00:13:05.408 "method": "bdev_xnvme_create" 00:13:05.408 }, 00:13:05.408 { 00:13:05.408 "method": "bdev_wait_for_examine" 00:13:05.408 } 00:13:05.408 ] 00:13:05.408 } 00:13:05.408 ] 00:13:05.408 } 00:13:05.669 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:05.669 fio-3.35 00:13:05.669 Starting 1 thread 00:13:12.258 00:13:12.258 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69826: Mon Nov 25 23:14:43 2024 00:13:12.258 write: IOPS=36.0k, BW=141MiB/s (147MB/s)(704MiB/5007msec); 0 zone resets 00:13:12.258 slat (usec): min=4, max=1718, avg=19.90, stdev=74.19 00:13:12.258 clat (usec): min=15, max=16760, avg=1261.02, stdev=1120.55 00:13:12.258 lat (usec): min=48, max=16778, avg=1280.92, stdev=1118.23 00:13:12.258 clat percentiles (usec): 00:13:12.258 | 1.00th=[ 223], 5.00th=[ 375], 10.00th=[ 510], 20.00th=[ 685], 00:13:12.258 | 30.00th=[ 807], 40.00th=[ 938], 50.00th=[ 1057], 60.00th=[ 1188], 00:13:12.258 | 70.00th=[ 1336], 80.00th=[ 1549], 90.00th=[ 1926], 95.00th=[ 2376], 00:13:12.258 | 99.00th=[ 7504], 99.50th=[ 8979], 99.90th=[11207], 99.95th=[12256], 00:13:12.258 | 99.99th=[15401] 00:13:12.258 bw ( KiB/s): min=122208, max=158320, per=100.00%, avg=144038.10, stdev=12128.58, samples=10 00:13:12.258 iops : min=30552, max=39580, avg=36009.50, stdev=3032.14, samples=10 00:13:12.258 lat (usec) : 20=0.01%, 50=0.01%, 100=0.04%, 250=1.47%, 500=8.06% 00:13:12.258 lat (usec) : 750=15.65%, 1000=19.75% 00:13:12.258 lat (msec) : 2=46.27%, 4=6.46%, 10=2.05%, 20=0.24% 00:13:12.258 cpu : usr=41.61%, sys=47.98%, ctx=11, majf=0, minf=764 00:13:12.258 IO depths : 1=0.3%, 2=0.8%, 4=2.6%, 8=7.9%, 16=21.9%, 32=63.9%, >=64=2.5% 00:13:12.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.258 complete : 0=0.0%, 4=97.8%, 8=0.1%, 16=0.2%, 32=0.4%, 64=1.6%, >=64=0.0% 00:13:12.258 issued rwts: total=0,180136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:12.258 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:12.258 00:13:12.258 Run status group 0 (all jobs): 00:13:12.259 WRITE: bw=141MiB/s (147MB/s), 141MiB/s-141MiB/s (147MB/s-147MB/s), io=704MiB (738MB), run=5007-5007msec 00:13:12.259 ----------------------------------------------------- 00:13:12.259 Suppressions used: 00:13:12.259 count bytes template 00:13:12.259 1 11 /usr/src/fio/parse.c 00:13:12.259 1 8 libtcmalloc_minimal.so 00:13:12.259 1 904 libcrypto.so 00:13:12.259 ----------------------------------------------------- 00:13:12.259 00:13:12.259 00:13:12.259 real 0m13.655s 
00:13:12.259 user 0m6.989s 00:13:12.259 sys 0m5.325s 00:13:12.259 ************************************ 00:13:12.259 END TEST xnvme_fio_plugin 00:13:12.259 ************************************ 00:13:12.259 23:14:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.259 23:14:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:12.259 23:14:44 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:12.259 23:14:44 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:12.259 23:14:44 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:12.259 23:14:44 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:12.259 23:14:44 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:12.259 23:14:44 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:12.259 23:14:44 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:12.259 23:14:44 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:12.259 23:14:44 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:12.259 23:14:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:12.259 23:14:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.259 23:14:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.259 ************************************ 00:13:12.259 START TEST xnvme_rpc 00:13:12.259 ************************************ 00:13:12.259 23:14:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:12.259 23:14:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:12.259 23:14:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:12.259 23:14:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:12.259 23:14:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:12.259 23:14:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69908 00:13:12.259 23:14:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69908 00:13:12.259 23:14:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69908 ']' 00:13:12.259 23:14:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.259 23:14:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.259 23:14:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.259 23:14:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:12.259 23:14:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.259 23:14:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.521 [2024-11-25 23:14:44.676915] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
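Both xnvme_fio_plugin passes above follow the same launch shape, visible in the fio_plugin wrapper: resolve the sanitizer runtime with ldd, preload it ahead of the SPDK bdev engine (ASan must come first in LD_PRELOAD), pass the JSON config on a file descriptor, and address the target by bdev name rather than device path. A standalone equivalent (binary paths copied from the trace; the config is assumed to be written to bdev.json instead of /dev/fd/62):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan=$(ldd "$plugin" | awk '/libasan/ {print $3}')   # /usr/lib64/libasan.so.8 here

LD_PRELOAD="$asan $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 \
  --numjobs=1 --rw=randread --time_based --runtime=5 \
  --thread=1 --name xnvme_bdev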
00:13:12.521 [2024-11-25 23:14:44.677034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69908 ] 00:13:12.521 [2024-11-25 23:14:44.833564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.782 [2024-11-25 23:14:44.933376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.358 xnvme_bdev 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.358 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69908 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69908 ']' 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69908 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69908 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.625 killing process with pid 69908 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69908' 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69908 00:13:13.625 23:14:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69908 00:13:15.011 00:13:15.011 real 0m2.703s 00:13:15.011 user 0m2.810s 00:13:15.011 sys 0m0.360s 00:13:15.011 23:14:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.011 ************************************ 00:13:15.011 END TEST xnvme_rpc 00:13:15.011 ************************************ 00:13:15.011 23:14:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.011 23:14:47 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:15.011 23:14:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:15.011 23:14:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.011 23:14:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.011 ************************************ 00:13:15.011 START TEST xnvme_bdevperf 00:13:15.011 ************************************ 00:13:15.011 23:14:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:15.011 23:14:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:15.011 23:14:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:15.011 23:14:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:15.011 23:14:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:15.011 23:14:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:15.011 23:14:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:15.011 23:14:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:15.272 { 00:13:15.272 "subsystems": [ 00:13:15.272 { 00:13:15.272 "subsystem": "bdev", 00:13:15.272 "config": [ 00:13:15.272 { 00:13:15.272 "params": { 00:13:15.272 "io_mechanism": "io_uring", 00:13:15.272 "conserve_cpu": false, 00:13:15.272 "filename": "/dev/nvme0n1", 00:13:15.272 "name": "xnvme_bdev" 00:13:15.272 }, 00:13:15.272 "method": "bdev_xnvme_create" 00:13:15.272 }, 00:13:15.272 { 00:13:15.272 "method": "bdev_wait_for_examine" 00:13:15.272 } 00:13:15.272 ] 00:13:15.272 } 00:13:15.272 ] 00:13:15.272 } 00:13:15.272 [2024-11-25 23:14:47.421864] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:13:15.272 [2024-11-25 23:14:47.422107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69982 ] 00:13:15.272 [2024-11-25 23:14:47.583437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.533 [2024-11-25 23:14:47.678776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.793 Running I/O for 5 seconds... 00:13:17.678 38994.00 IOPS, 152.32 MiB/s [2024-11-25T23:14:50.987Z] 37885.00 IOPS, 147.99 MiB/s [2024-11-25T23:14:51.959Z] 36366.33 IOPS, 142.06 MiB/s [2024-11-25T23:14:53.347Z] 35155.75 IOPS, 137.33 MiB/s [2024-11-25T23:14:53.347Z] 34270.40 IOPS, 133.87 MiB/s 00:13:20.978 Latency(us) 00:13:20.978 [2024-11-25T23:14:53.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.978 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:20.978 xnvme_bdev : 5.01 34232.26 133.72 0.00 0.00 1863.94 263.09 11796.48 00:13:20.978 [2024-11-25T23:14:53.347Z] =================================================================================================================== 00:13:20.978 [2024-11-25T23:14:53.347Z] Total : 34232.26 133.72 0.00 0.00 1863.94 263.09 11796.48 00:13:21.550 23:14:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:21.550 23:14:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:21.550 23:14:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:21.550 23:14:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:21.550 23:14:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:21.550 { 00:13:21.550 "subsystems": [ 00:13:21.550 { 00:13:21.550 "subsystem": "bdev", 00:13:21.550 "config": [ 00:13:21.550 { 00:13:21.550 "params": { 00:13:21.550 "io_mechanism": "io_uring", 00:13:21.550 "conserve_cpu": false, 00:13:21.550 "filename": "/dev/nvme0n1", 00:13:21.550 "name": "xnvme_bdev" 00:13:21.550 }, 00:13:21.550 "method": "bdev_xnvme_create" 00:13:21.550 }, 00:13:21.550 { 00:13:21.550 "method": "bdev_wait_for_examine" 00:13:21.550 } 00:13:21.550 ] 00:13:21.550 } 00:13:21.550 ] 00:13:21.550 } 00:13:21.550 [2024-11-25 23:14:53.800623] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
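Note on the JSON blob printed above: it is what gen_conf emits and what bdevperf reads back on --json /dev/fd/62 — the harness builds the config in a shell function and hands it to the app through process substitution, so no config file touches disk. A minimal standalone sketch of the same pattern, assuming a local SPDK build at the path shown in the trace (the heredoc mirrors the config captured above):

  # Feed an inline bdev config to bdevperf via process substitution (bash).
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  "$bdevperf" --json <(cat <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev",
        "config": [
          { "method": "bdev_xnvme_create",
            "params": { "io_mechanism": "io_uring", "conserve_cpu": false,
                        "filename": "/dev/nvme0n1", "name": "xnvme_bdev" } },
          { "method": "bdev_wait_for_examine" }
        ] }
    ]
  }
  EOF
  ) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096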
00:13:21.550 [2024-11-25 23:14:53.800765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70052 ] 00:13:21.811 [2024-11-25 23:14:53.964420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.811 [2024-11-25 23:14:54.083548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.073 Running I/O for 5 seconds... 00:13:24.401 5497.00 IOPS, 21.47 MiB/s [2024-11-25T23:14:57.714Z] 6134.00 IOPS, 23.96 MiB/s [2024-11-25T23:14:58.655Z] 6290.67 IOPS, 24.57 MiB/s [2024-11-25T23:14:59.596Z] 6284.75 IOPS, 24.55 MiB/s [2024-11-25T23:14:59.596Z] 6980.60 IOPS, 27.27 MiB/s 00:13:27.227 Latency(us) 00:13:27.227 [2024-11-25T23:14:59.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.227 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:27.227 xnvme_bdev : 5.01 6986.06 27.29 0.00 0.00 9151.98 57.90 154866.61 00:13:27.227 [2024-11-25T23:14:59.596Z] =================================================================================================================== 00:13:27.227 [2024-11-25T23:14:59.596Z] Total : 6986.06 27.29 0.00 0.00 9151.98 57.90 154866.61 00:13:27.798 00:13:27.798 real 0m12.719s 00:13:27.798 user 0m5.700s 00:13:27.798 sys 0m6.753s 00:13:27.798 23:15:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.798 ************************************ 00:13:27.798 END TEST xnvme_bdevperf 00:13:27.798 ************************************ 00:13:27.798 23:15:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:27.798 23:15:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:27.798 23:15:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:27.798 23:15:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.798 23:15:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.798 ************************************ 00:13:27.798 START TEST xnvme_fio_plugin 00:13:27.798 ************************************ 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:27.798 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:28.059 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:28.059 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:28.059 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:28.059 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:28.059 23:15:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:28.059 { 00:13:28.059 "subsystems": [ 00:13:28.059 { 00:13:28.059 "subsystem": "bdev", 00:13:28.059 "config": [ 00:13:28.059 { 00:13:28.059 "params": { 00:13:28.059 "io_mechanism": "io_uring", 00:13:28.059 "conserve_cpu": false, 00:13:28.059 "filename": "/dev/nvme0n1", 00:13:28.059 "name": "xnvme_bdev" 00:13:28.059 }, 00:13:28.059 "method": "bdev_xnvme_create" 00:13:28.059 }, 00:13:28.059 { 00:13:28.059 "method": "bdev_wait_for_examine" 00:13:28.059 } 00:13:28.059 ] 00:13:28.059 } 00:13:28.059 ] 00:13:28.059 } 00:13:28.059 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:28.059 fio-3.35 00:13:28.059 Starting 1 thread 00:13:34.651 00:13:34.651 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70171: Mon Nov 25 23:15:05 2024 00:13:34.651 read: IOPS=42.3k, BW=165MiB/s (173MB/s)(828MiB/5003msec) 00:13:34.651 slat (usec): min=2, max=327, avg= 3.67, stdev= 2.29 00:13:34.651 clat (usec): min=355, max=13370, avg=1366.27, stdev=423.48 00:13:34.651 lat (usec): min=364, max=13373, avg=1369.94, stdev=423.94 00:13:34.651 clat percentiles (usec): 00:13:34.651 | 1.00th=[ 824], 5.00th=[ 922], 10.00th=[ 988], 20.00th=[ 1057], 00:13:34.651 | 30.00th=[ 1123], 40.00th=[ 1172], 50.00th=[ 1237], 60.00th=[ 1369], 00:13:34.651 | 70.00th=[ 1500], 80.00th=[ 1663], 90.00th=[ 1860], 95.00th=[ 2040], 00:13:34.651 | 99.00th=[ 2671], 99.50th=[ 3163], 99.90th=[ 4424], 99.95th=[ 5669], 00:13:34.651 | 99.99th=[10552] 00:13:34.651 bw ( KiB/s): min=131720, max=219720, 
per=100.00%, avg=174045.33, stdev=33033.56, samples=9 00:13:34.651 iops : min=32930, max=54930, avg=43511.56, stdev=8258.85, samples=9 00:13:34.651 lat (usec) : 500=0.04%, 750=0.19%, 1000=10.78% 00:13:34.651 lat (msec) : 2=83.29%, 4=5.55%, 10=0.15%, 20=0.01% 00:13:34.651 cpu : usr=33.91%, sys=64.81%, ctx=10, majf=0, minf=762 00:13:34.651 IO depths : 1=1.4%, 2=2.8%, 4=5.8%, 8=12.0%, 16=24.7%, 32=51.6%, >=64=1.7% 00:13:34.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.652 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:34.652 issued rwts: total=211870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.652 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:34.652 00:13:34.652 Run status group 0 (all jobs): 00:13:34.652 READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=828MiB (868MB), run=5003-5003msec 00:13:34.652 ----------------------------------------------------- 00:13:34.652 Suppressions used: 00:13:34.652 count bytes template 00:13:34.652 1 11 /usr/src/fio/parse.c 00:13:34.652 1 8 libtcmalloc_minimal.so 00:13:34.652 1 904 libcrypto.so 00:13:34.652 ----------------------------------------------------- 00:13:34.652 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:34.652 23:15:06 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:34.652 23:15:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:34.652 { 00:13:34.652 "subsystems": [ 00:13:34.652 { 00:13:34.652 "subsystem": "bdev", 00:13:34.652 "config": [ 00:13:34.652 { 00:13:34.652 "params": { 00:13:34.652 "io_mechanism": "io_uring", 00:13:34.652 "conserve_cpu": false, 00:13:34.652 "filename": "/dev/nvme0n1", 00:13:34.652 "name": "xnvme_bdev" 00:13:34.652 }, 00:13:34.652 "method": "bdev_xnvme_create" 00:13:34.652 }, 00:13:34.652 { 00:13:34.652 "method": "bdev_wait_for_examine" 00:13:34.652 } 00:13:34.652 ] 00:13:34.652 } 00:13:34.652 ] 00:13:34.652 } 00:13:34.913 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:34.913 fio-3.35 00:13:34.913 Starting 1 thread 00:13:41.544 00:13:41.544 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70263: Mon Nov 25 23:15:12 2024 00:13:41.544 write: IOPS=35.2k, BW=138MiB/s (144MB/s)(688MiB/5002msec); 0 zone resets 00:13:41.544 slat (usec): min=2, max=100, avg= 4.24, stdev= 2.67 00:13:41.544 clat (usec): min=299, max=5237, avg=1643.29, stdev=321.86 00:13:41.544 lat (usec): min=305, max=5242, avg=1647.53, stdev=322.51 00:13:41.544 clat percentiles (usec): 00:13:41.544 | 1.00th=[ 1004], 5.00th=[ 1205], 10.00th=[ 1287], 20.00th=[ 1385], 00:13:41.544 | 30.00th=[ 1467], 40.00th=[ 1532], 50.00th=[ 1598], 60.00th=[ 1680], 00:13:41.544 | 70.00th=[ 1762], 80.00th=[ 1876], 90.00th=[ 2057], 95.00th=[ 2212], 00:13:41.544 | 99.00th=[ 2606], 99.50th=[ 2769], 99.90th=[ 3261], 99.95th=[ 3425], 00:13:41.544 | 99.99th=[ 4621] 00:13:41.544 bw ( KiB/s): min=130880, max=151360, per=99.08%, avg=139571.78, stdev=6374.85, samples=9 00:13:41.544 iops : min=32720, max=37840, avg=34892.89, stdev=1593.72, samples=9 00:13:41.544 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.93% 00:13:41.544 lat (msec) : 2=86.46%, 4=12.58%, 10=0.01% 00:13:41.544 cpu : usr=34.89%, sys=63.57%, ctx=16, majf=0, minf=762 00:13:41.544 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:13:41.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.544 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:41.544 issued rwts: total=0,176153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.544 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:41.544 00:13:41.544 Run status group 0 (all jobs): 00:13:41.544 WRITE: bw=138MiB/s (144MB/s), 138MiB/s-138MiB/s (144MB/s-144MB/s), io=688MiB (722MB), run=5002-5002msec 00:13:41.544 ----------------------------------------------------- 00:13:41.544 Suppressions used: 00:13:41.544 count bytes template 00:13:41.544 1 11 /usr/src/fio/parse.c 00:13:41.544 1 8 libtcmalloc_minimal.so 00:13:41.544 1 904 libcrypto.so 00:13:41.544 ----------------------------------------------------- 00:13:41.544 00:13:41.544 ************************************ 00:13:41.544 END TEST xnvme_fio_plugin 00:13:41.544 
************************************ 00:13:41.544 00:13:41.544 real 0m13.615s 00:13:41.544 user 0m6.199s 00:13:41.544 sys 0m6.949s 00:13:41.544 23:15:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.544 23:15:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:41.544 23:15:13 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:41.544 23:15:13 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:41.544 23:15:13 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:41.544 23:15:13 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:41.544 23:15:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:41.544 23:15:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.544 23:15:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:41.544 ************************************ 00:13:41.544 START TEST xnvme_rpc 00:13:41.544 ************************************ 00:13:41.544 23:15:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:41.544 23:15:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:41.544 23:15:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:41.544 23:15:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:41.544 23:15:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:41.544 23:15:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70349 00:13:41.544 23:15:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70349 00:13:41.544 23:15:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70349 ']' 00:13:41.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.544 23:15:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.544 23:15:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:41.544 23:15:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.544 23:15:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.544 23:15:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.544 23:15:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.544 [2024-11-25 23:15:13.877148] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
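Note: this second xnvme_rpc pass is the conserve_cpu=true variant of the one logged earlier ('' was the placeholder; -c enables it). The rpc_xnvme calls seen in both traces are a thin wrapper that reads a single parameter of the saved bdev_xnvme_create call back out of the target's config; reconstructed from the xnvme/common.sh lines visible above (rpc_cmd is the harness's rpc.py wrapper, so treat this as a sketch):

  # rpc_xnvme <param>: read one bdev_xnvme_create parameter back from spdk_tgt.
  rpc_xnvme() {
    rpc_cmd framework_get_config bdev |
      jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
  }
  rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c   # -c => conserve_cpu
  rpc_xnvme conserve_cpu    # expect: true
  rpc_cmd bdev_xnvme_delete xnvme_bdev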
00:13:41.544 [2024-11-25 23:15:13.877407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70349 ] 00:13:41.805 [2024-11-25 23:15:14.038292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.806 [2024-11-25 23:15:14.133372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.378 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.378 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:42.378 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:42.378 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.379 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.639 xnvme_bdev 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70349 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70349 ']' 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70349 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70349 00:13:42.639 killing process with pid 70349 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70349' 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70349 00:13:42.639 23:15:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70349 00:13:44.550 00:13:44.550 real 0m2.692s 00:13:44.550 user 0m2.790s 00:13:44.550 sys 0m0.367s 00:13:44.550 ************************************ 00:13:44.550 END TEST xnvme_rpc 00:13:44.550 ************************************ 00:13:44.550 23:15:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.550 23:15:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.550 23:15:16 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:44.551 23:15:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:44.551 23:15:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.551 23:15:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.551 ************************************ 00:13:44.551 START TEST xnvme_bdevperf 00:13:44.551 ************************************ 00:13:44.551 23:15:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:44.551 23:15:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:44.551 23:15:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:44.551 23:15:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:44.551 23:15:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:44.551 23:15:16 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:44.551 23:15:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:44.551 23:15:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:44.551 { 00:13:44.551 "subsystems": [ 00:13:44.551 { 00:13:44.551 "subsystem": "bdev", 00:13:44.551 "config": [ 00:13:44.551 { 00:13:44.551 "params": { 00:13:44.551 "io_mechanism": "io_uring", 00:13:44.551 "conserve_cpu": true, 00:13:44.551 "filename": "/dev/nvme0n1", 00:13:44.551 "name": "xnvme_bdev" 00:13:44.551 }, 00:13:44.551 "method": "bdev_xnvme_create" 00:13:44.551 }, 00:13:44.551 { 00:13:44.551 "method": "bdev_wait_for_examine" 00:13:44.551 } 00:13:44.551 ] 00:13:44.551 } 00:13:44.551 ] 00:13:44.551 } 00:13:44.551 [2024-11-25 23:15:16.627715] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:13:44.551 [2024-11-25 23:15:16.627829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70412 ] 00:13:44.551 [2024-11-25 23:15:16.780233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.551 [2024-11-25 23:15:16.894372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.811 Running I/O for 5 seconds... 00:13:47.137 44673.00 IOPS, 174.50 MiB/s [2024-11-25T23:15:20.451Z] 41947.00 IOPS, 163.86 MiB/s [2024-11-25T23:15:21.395Z] 41172.00 IOPS, 160.83 MiB/s [2024-11-25T23:15:22.339Z] 41007.00 IOPS, 160.18 MiB/s [2024-11-25T23:15:22.339Z] 40754.40 IOPS, 159.20 MiB/s 00:13:49.970 Latency(us) 00:13:49.970 [2024-11-25T23:15:22.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.970 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:49.970 xnvme_bdev : 5.01 40731.03 159.11 0.00 0.00 1567.42 381.24 14417.92 00:13:49.970 [2024-11-25T23:15:22.339Z] =================================================================================================================== 00:13:49.970 [2024-11-25T23:15:22.339Z] Total : 40731.03 159.11 0.00 0.00 1567.42 381.24 14417.92 00:13:50.914 23:15:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:50.914 23:15:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:50.914 23:15:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:50.914 23:15:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:50.914 23:15:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:50.914 { 00:13:50.914 "subsystems": [ 00:13:50.914 { 00:13:50.914 "subsystem": "bdev", 00:13:50.914 "config": [ 00:13:50.914 { 00:13:50.914 "params": { 00:13:50.914 "io_mechanism": "io_uring", 00:13:50.914 "conserve_cpu": true, 00:13:50.914 "filename": "/dev/nvme0n1", 00:13:50.914 "name": "xnvme_bdev" 00:13:50.914 }, 00:13:50.914 "method": "bdev_xnvme_create" 00:13:50.914 }, 00:13:50.914 { 00:13:50.914 "method": "bdev_wait_for_examine" 00:13:50.914 } 00:13:50.914 ] 00:13:50.914 } 00:13:50.914 ] 00:13:50.914 } 00:13:50.914 [2024-11-25 23:15:23.028196] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
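Note the one-line difference from the earlier bdevperf config: "conserve_cpu": true. In the xnvme bdev module this toggles whether the reactor busy-polls the io_uring completion ring; with it enabled the driver can yield the core between completions (that description is a summary of the knob's intent, not a spec — the mechanism is an xNVMe implementation detail). In this particular log randread moved from 34232.26 IOPS / 1863.94 us average (conserve_cpu=false) to 40731.03 IOPS / 1567.42 us here, though single five-second runs on shared CI hardware are only indicative. The setting can be verified against the running target:

  # Confirm which variant is loaded; prints false in the first pass, true here.
  rpc_cmd framework_get_config bdev |
    jq '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'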
00:13:50.914 [2024-11-25 23:15:23.028343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70492 ] 00:13:50.914 [2024-11-25 23:15:23.189814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.176 [2024-11-25 23:15:23.323149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.437 Running I/O for 5 seconds... 00:13:53.322 33775.00 IOPS, 131.93 MiB/s [2024-11-25T23:15:26.635Z] 34392.00 IOPS, 134.34 MiB/s [2024-11-25T23:15:27.644Z] 35197.67 IOPS, 137.49 MiB/s [2024-11-25T23:15:29.042Z] 35395.00 IOPS, 138.26 MiB/s 00:13:56.673 Latency(us) 00:13:56.673 [2024-11-25T23:15:29.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.673 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:56.673 xnvme_bdev : 5.00 36032.89 140.75 0.00 0.00 1770.96 705.77 8217.21 00:13:56.673 [2024-11-25T23:15:29.042Z] =================================================================================================================== 00:13:56.673 [2024-11-25T23:15:29.042Z] Total : 36032.89 140.75 0.00 0.00 1770.96 705.77 8217.21 00:13:57.245 00:13:57.245 real 0m12.839s 00:13:57.245 user 0m7.221s 00:13:57.245 sys 0m4.866s 00:13:57.245 ************************************ 00:13:57.245 END TEST xnvme_bdevperf 00:13:57.245 ************************************ 00:13:57.245 23:15:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.245 23:15:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:57.245 23:15:29 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:57.245 23:15:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:57.245 23:15:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.245 23:15:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.245 ************************************ 00:13:57.245 START TEST xnvme_fio_plugin 00:13:57.245 ************************************ 00:13:57.245 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:57.245 23:15:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:57.245 23:15:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:57.245 23:15:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:57.245 23:15:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:57.245 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:57.245 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:57.245 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:57.245 23:15:29 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:57.245 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:57.245 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:57.245 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:57.246 23:15:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:57.246 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:57.246 23:15:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:57.246 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:57.246 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:57.246 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:57.246 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:57.246 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:57.246 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:57.246 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:57.246 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:57.246 23:15:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:57.246 { 00:13:57.246 "subsystems": [ 00:13:57.246 { 00:13:57.246 "subsystem": "bdev", 00:13:57.246 "config": [ 00:13:57.246 { 00:13:57.246 "params": { 00:13:57.246 "io_mechanism": "io_uring", 00:13:57.246 "conserve_cpu": true, 00:13:57.246 "filename": "/dev/nvme0n1", 00:13:57.246 "name": "xnvme_bdev" 00:13:57.246 }, 00:13:57.246 "method": "bdev_xnvme_create" 00:13:57.246 }, 00:13:57.246 { 00:13:57.246 "method": "bdev_wait_for_examine" 00:13:57.246 } 00:13:57.246 ] 00:13:57.246 } 00:13:57.246 ] 00:13:57.246 } 00:13:57.507 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:57.507 fio-3.35 00:13:57.507 Starting 1 thread 00:14:04.094 00:14:04.094 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70612: Mon Nov 25 23:15:35 2024 00:14:04.094 read: IOPS=35.4k, BW=138MiB/s (145MB/s)(692MiB/5002msec) 00:14:04.094 slat (usec): min=2, max=101, avg= 3.95, stdev= 2.47 00:14:04.094 clat (usec): min=801, max=3137, avg=1644.10, stdev=275.85 00:14:04.094 lat (usec): min=804, max=3166, avg=1648.05, stdev=276.47 00:14:04.094 clat percentiles (usec): 00:14:04.094 | 1.00th=[ 1057], 5.00th=[ 1237], 10.00th=[ 1336], 20.00th=[ 1434], 00:14:04.094 | 30.00th=[ 1500], 40.00th=[ 1549], 50.00th=[ 1614], 60.00th=[ 1680], 00:14:04.094 | 70.00th=[ 1745], 80.00th=[ 1860], 90.00th=[ 2008], 95.00th=[ 2147], 00:14:04.094 | 99.00th=[ 2409], 99.50th=[ 2507], 99.90th=[ 2802], 99.95th=[ 2933], 00:14:04.094 | 99.99th=[ 3064] 00:14:04.094 bw ( KiB/s): min=130810, max=144896, per=98.17%, avg=139121.11, stdev=4298.73, 
samples=9 00:14:04.094 iops : min=32702, max=36224, avg=34780.22, stdev=1074.80, samples=9 00:14:04.094 lat (usec) : 1000=0.45% 00:14:04.094 lat (msec) : 2=88.88%, 4=10.67% 00:14:04.094 cpu : usr=43.49%, sys=51.73%, ctx=10, majf=0, minf=762 00:14:04.094 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:04.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.094 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:04.094 issued rwts: total=177216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.094 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:04.094 00:14:04.094 Run status group 0 (all jobs): 00:14:04.094 READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=692MiB (726MB), run=5002-5002msec 00:14:04.094 ----------------------------------------------------- 00:14:04.094 Suppressions used: 00:14:04.094 count bytes template 00:14:04.094 1 11 /usr/src/fio/parse.c 00:14:04.094 1 8 libtcmalloc_minimal.so 00:14:04.094 1 904 libcrypto.so 00:14:04.094 ----------------------------------------------------- 00:14:04.094 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:04.094 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:04.095 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:04.095 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 
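Note: the asan_lib lines above are the harness locating the ASAN runtime that the fio plugin links against; the break and LD_PRELOAD lines that follow complete the pattern. Preloading both the sanitizer and the plugin is the standard way to load SPDK's external ioengine into a stock fio binary. Reconstructed as a standalone sketch (paths from this log; the JSON on /dev/fd/62 is the same gen_conf output shown earlier):

  # Preload the sanitizer runtime ahead of the external ioengine, then run fio.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  [[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib $plugin" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev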
00:14:04.095 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:04.095 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:04.095 23:15:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:04.095 { 00:14:04.095 "subsystems": [ 00:14:04.095 { 00:14:04.095 "subsystem": "bdev", 00:14:04.095 "config": [ 00:14:04.095 { 00:14:04.095 "params": { 00:14:04.095 "io_mechanism": "io_uring", 00:14:04.095 "conserve_cpu": true, 00:14:04.095 "filename": "/dev/nvme0n1", 00:14:04.095 "name": "xnvme_bdev" 00:14:04.095 }, 00:14:04.095 "method": "bdev_xnvme_create" 00:14:04.095 }, 00:14:04.095 { 00:14:04.095 "method": "bdev_wait_for_examine" 00:14:04.095 } 00:14:04.095 ] 00:14:04.095 } 00:14:04.095 ] 00:14:04.095 } 00:14:04.356 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:04.356 fio-3.35 00:14:04.356 Starting 1 thread 00:14:10.945 00:14:10.945 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70698: Mon Nov 25 23:15:42 2024 00:14:10.945 write: IOPS=36.4k, BW=142MiB/s (149MB/s)(711MiB/5004msec); 0 zone resets 00:14:10.945 slat (usec): min=2, max=365, avg= 3.99, stdev= 2.78 00:14:10.945 clat (usec): min=84, max=15120, avg=1597.14, stdev=550.26 00:14:10.945 lat (usec): min=88, max=15134, avg=1601.13, stdev=550.50 00:14:10.945 clat percentiles (usec): 00:14:10.945 | 1.00th=[ 922], 5.00th=[ 1106], 10.00th=[ 1205], 20.00th=[ 1319], 00:14:10.945 | 30.00th=[ 1401], 40.00th=[ 1483], 50.00th=[ 1549], 60.00th=[ 1614], 00:14:10.945 | 70.00th=[ 1696], 80.00th=[ 1811], 90.00th=[ 1991], 95.00th=[ 2147], 00:14:10.945 | 99.00th=[ 2606], 99.50th=[ 2999], 99.90th=[10552], 99.95th=[11469], 00:14:10.945 | 99.99th=[12780] 00:14:10.945 bw ( KiB/s): min=137160, max=170224, per=100.00%, avg=147821.33, stdev=12970.20, samples=9 00:14:10.945 iops : min=34290, max=42556, avg=36955.33, stdev=3242.55, samples=9 00:14:10.945 lat (usec) : 100=0.01%, 250=0.01%, 500=0.04%, 750=0.27%, 1000=1.74% 00:14:10.945 lat (msec) : 2=88.45%, 4=9.18%, 10=0.16%, 20=0.14% 00:14:10.945 cpu : usr=50.69%, sys=44.67%, ctx=10, majf=0, minf=762 00:14:10.945 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.8%, 32=50.5%, >=64=1.7% 00:14:10.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.945 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:10.945 issued rwts: total=0,182003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.945 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:10.945 00:14:10.945 Run status group 0 (all jobs): 00:14:10.945 WRITE: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=711MiB (745MB), run=5004-5004msec 00:14:10.945 ----------------------------------------------------- 00:14:10.945 Suppressions used: 00:14:10.945 count bytes template 00:14:10.945 1 11 /usr/src/fio/parse.c 00:14:10.945 1 8 libtcmalloc_minimal.so 00:14:10.945 1 904 libcrypto.so 00:14:10.945 ----------------------------------------------------- 00:14:10.945 00:14:10.945 00:14:10.945 real 0m13.794s 00:14:10.945 user 0m7.588s 00:14:10.945 sys 0m5.415s 00:14:10.945 ************************************ 00:14:10.945 23:15:43 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.945 23:15:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:10.945 END TEST xnvme_fio_plugin 00:14:10.945 ************************************ 00:14:11.206 23:15:43 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:11.207 23:15:43 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:11.207 23:15:43 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:11.207 23:15:43 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:11.207 23:15:43 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:11.207 23:15:43 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:11.207 23:15:43 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:11.207 23:15:43 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:11.207 23:15:43 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:11.207 23:15:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:11.207 23:15:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.207 23:15:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:11.207 ************************************ 00:14:11.207 START TEST xnvme_rpc 00:14:11.207 ************************************ 00:14:11.207 23:15:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:11.207 23:15:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:11.207 23:15:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:11.207 23:15:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:11.207 23:15:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:11.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.207 23:15:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70784 00:14:11.207 23:15:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70784 00:14:11.207 23:15:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70784 ']' 00:14:11.207 23:15:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.207 23:15:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.207 23:15:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.207 23:15:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:11.207 23:15:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.207 23:15:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.207 [2024-11-25 23:15:43.426437] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
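Note: from here the suite switches io_mechanism to io_uring_cmd, which drives the NVMe generic character device (/dev/ng0n1) with NVMe passthrough commands over io_uring, rather than block I/O against /dev/nvme0n1. The create call differs only in the device node and mechanism:

  # io_uring_cmd variant: char device + passthrough; '' leaves conserve_cpu off.
  rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd ''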
00:14:11.207 [2024-11-25 23:15:43.426856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70784 ] 00:14:11.469 [2024-11-25 23:15:43.591483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.469 [2024-11-25 23:15:43.714940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.412 xnvme_bdev 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70784 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70784 ']' 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70784 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70784 00:14:12.412 killing process with pid 70784 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70784' 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70784 00:14:12.412 23:15:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70784 00:14:14.329 ************************************ 00:14:14.329 END TEST xnvme_rpc 00:14:14.329 ************************************ 00:14:14.329 00:14:14.329 real 0m2.900s 00:14:14.329 user 0m2.907s 00:14:14.329 sys 0m0.463s 00:14:14.329 23:15:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.329 23:15:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.329 23:15:46 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:14.329 23:15:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:14.329 23:15:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.329 23:15:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.329 ************************************ 00:14:14.329 START TEST xnvme_bdevperf 00:14:14.329 ************************************ 00:14:14.329 23:15:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:14.329 23:15:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:14.329 23:15:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:14.329 23:15:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:14.329 23:15:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:14.329 23:15:46 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:14.329 23:15:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:14.329 23:15:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:14.329 { 00:14:14.329 "subsystems": [ 00:14:14.329 { 00:14:14.329 "subsystem": "bdev", 00:14:14.329 "config": [ 00:14:14.329 { 00:14:14.329 "params": { 00:14:14.329 "io_mechanism": "io_uring_cmd", 00:14:14.329 "conserve_cpu": false, 00:14:14.329 "filename": "/dev/ng0n1", 00:14:14.329 "name": "xnvme_bdev" 00:14:14.329 }, 00:14:14.329 "method": "bdev_xnvme_create" 00:14:14.329 }, 00:14:14.329 { 00:14:14.329 "method": "bdev_wait_for_examine" 00:14:14.329 } 00:14:14.329 ] 00:14:14.329 } 00:14:14.329 ] 00:14:14.329 } 00:14:14.329 [2024-11-25 23:15:46.371009] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:14:14.329 [2024-11-25 23:15:46.371178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70853 ] 00:14:14.329 [2024-11-25 23:15:46.536026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.329 [2024-11-25 23:15:46.663720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.591 Running I/O for 5 seconds... 00:14:16.632 34775.00 IOPS, 135.84 MiB/s [2024-11-25T23:15:50.380Z] 34603.00 IOPS, 135.17 MiB/s [2024-11-25T23:15:51.321Z] 34417.00 IOPS, 134.44 MiB/s [2024-11-25T23:15:52.264Z] 34367.50 IOPS, 134.25 MiB/s 00:14:19.895 Latency(us) 00:14:19.895 [2024-11-25T23:15:52.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.895 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:19.895 xnvme_bdev : 5.00 34191.04 133.56 0.00 0.00 1867.56 395.42 6906.49 00:14:19.895 [2024-11-25T23:15:52.264Z] =================================================================================================================== 00:14:19.895 [2024-11-25T23:15:52.264Z] Total : 34191.04 133.56 0.00 0.00 1867.56 395.42 6906.49 00:14:20.468 23:15:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:20.468 23:15:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:20.468 23:15:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:20.468 23:15:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:20.468 23:15:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:20.468 { 00:14:20.468 "subsystems": [ 00:14:20.468 { 00:14:20.468 "subsystem": "bdev", 00:14:20.468 "config": [ 00:14:20.468 { 00:14:20.468 "params": { 00:14:20.468 "io_mechanism": "io_uring_cmd", 00:14:20.468 "conserve_cpu": false, 00:14:20.468 "filename": "/dev/ng0n1", 00:14:20.468 "name": "xnvme_bdev" 00:14:20.468 }, 00:14:20.468 "method": "bdev_xnvme_create" 00:14:20.468 }, 00:14:20.468 { 00:14:20.468 "method": "bdev_wait_for_examine" 00:14:20.468 } 00:14:20.468 ] 00:14:20.468 } 00:14:20.468 ] 00:14:20.468 } 00:14:20.468 [2024-11-25 23:15:52.803049] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
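Note: this io_uring_cmd pass fans out over four workloads against the same bdev — randread here, then randwrite, unmap and write_zeroes below. The harness iterates io_pattern_ref, but condensed it is effectively the loop sketched here (gen_conf stands in for the JSON shown above; bdevperf path from this log; flag meanings are the usual bdevperf options and worth double-checking against --help on the build in use):

  # -q 64 queue depth, -o 4096 I/O size, -t 5 seconds, -T names the target bdev.
  for wl in randread randwrite unmap write_zeroes; do
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_conf) \
      -q 64 -w "$wl" -t 5 -T xnvme_bdev -o 4096
  done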
00:14:20.468 [2024-11-25 23:15:52.803208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70932 ] 00:14:20.729 [2024-11-25 23:15:52.967953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.990 [2024-11-25 23:15:53.095841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.252 Running I/O for 5 seconds... 00:14:23.143 11054.00 IOPS, 43.18 MiB/s [2024-11-25T23:15:56.455Z] 11771.50 IOPS, 45.98 MiB/s [2024-11-25T23:15:57.845Z] 11839.00 IOPS, 46.25 MiB/s [2024-11-25T23:15:58.418Z] 11691.25 IOPS, 45.67 MiB/s [2024-11-25T23:15:58.418Z] 11736.40 IOPS, 45.85 MiB/s 00:14:26.049 Latency(us) 00:14:26.049 [2024-11-25T23:15:58.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.049 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:26.049 xnvme_bdev : 5.01 11724.91 45.80 0.00 0.00 5449.96 65.38 23088.84 00:14:26.049 [2024-11-25T23:15:58.418Z] =================================================================================================================== 00:14:26.049 [2024-11-25T23:15:58.418Z] Total : 11724.91 45.80 0.00 0.00 5449.96 65.38 23088.84 00:14:26.990 23:15:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:26.990 23:15:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:26.990 23:15:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:26.990 23:15:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:26.990 23:15:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:26.990 { 00:14:26.990 "subsystems": [ 00:14:26.990 { 00:14:26.990 "subsystem": "bdev", 00:14:26.990 "config": [ 00:14:26.990 { 00:14:26.990 "params": { 00:14:26.990 "io_mechanism": "io_uring_cmd", 00:14:26.990 "conserve_cpu": false, 00:14:26.990 "filename": "/dev/ng0n1", 00:14:26.990 "name": "xnvme_bdev" 00:14:26.990 }, 00:14:26.990 "method": "bdev_xnvme_create" 00:14:26.990 }, 00:14:26.990 { 00:14:26.990 "method": "bdev_wait_for_examine" 00:14:26.990 } 00:14:26.990 ] 00:14:26.990 } 00:14:26.990 ] 00:14:26.990 } 00:14:26.990 [2024-11-25 23:15:59.295843] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:14:26.990 [2024-11-25 23:15:59.295953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71007 ] 00:14:27.251 [2024-11-25 23:15:59.456104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.251 [2024-11-25 23:15:59.558829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.512 Running I/O for 5 seconds... 
00:14:29.468 80576.00 IOPS, 314.75 MiB/s [2024-11-25T23:16:03.224Z] 85728.00 IOPS, 334.88 MiB/s [2024-11-25T23:16:04.166Z] 88682.67 IOPS, 346.42 MiB/s [2024-11-25T23:16:05.107Z] 90384.00 IOPS, 353.06 MiB/s [2024-11-25T23:16:05.107Z] 91379.20 IOPS, 356.95 MiB/s 00:14:32.738 Latency(us) 00:14:32.738 [2024-11-25T23:16:05.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.739 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:32.739 xnvme_bdev : 5.00 91347.91 356.83 0.00 0.00 697.60 415.90 2369.38 00:14:32.739 [2024-11-25T23:16:05.108Z] =================================================================================================================== 00:14:32.739 [2024-11-25T23:16:05.108Z] Total : 91347.91 356.83 0.00 0.00 697.60 415.90 2369.38 00:14:33.353 23:16:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:33.353 23:16:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:33.354 23:16:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:33.354 23:16:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:33.354 23:16:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:33.354 { 00:14:33.354 "subsystems": [ 00:14:33.354 { 00:14:33.354 "subsystem": "bdev", 00:14:33.354 "config": [ 00:14:33.354 { 00:14:33.354 "params": { 00:14:33.354 "io_mechanism": "io_uring_cmd", 00:14:33.354 "conserve_cpu": false, 00:14:33.354 "filename": "/dev/ng0n1", 00:14:33.354 "name": "xnvme_bdev" 00:14:33.354 }, 00:14:33.354 "method": "bdev_xnvme_create" 00:14:33.354 }, 00:14:33.354 { 00:14:33.354 "method": "bdev_wait_for_examine" 00:14:33.354 } 00:14:33.354 ] 00:14:33.354 } 00:14:33.354 ] 00:14:33.354 } 00:14:33.354 [2024-11-25 23:16:05.475794] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:14:33.354 [2024-11-25 23:16:05.475907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71081 ] 00:14:33.354 [2024-11-25 23:16:05.631167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.354 [2024-11-25 23:16:05.715448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.611 Running I/O for 5 seconds... 
00:14:35.910 86267.00 IOPS, 336.98 MiB/s [2024-11-25T23:16:09.213Z] 84848.50 IOPS, 331.44 MiB/s [2024-11-25T23:16:10.149Z] 82093.33 IOPS, 320.68 MiB/s [2024-11-25T23:16:11.088Z] 78186.25 IOPS, 305.42 MiB/s [2024-11-25T23:16:11.088Z] 74257.40 IOPS, 290.07 MiB/s 00:14:38.719 Latency(us) 00:14:38.719 [2024-11-25T23:16:11.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.719 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:38.719 xnvme_bdev : 5.00 74208.93 289.88 0.00 0.00 858.72 159.90 10536.17 00:14:38.719 [2024-11-25T23:16:11.088Z] =================================================================================================================== 00:14:38.719 [2024-11-25T23:16:11.088Z] Total : 74208.93 289.88 0.00 0.00 858.72 159.90 10536.17 00:14:39.663 ************************************ 00:14:39.663 END TEST xnvme_bdevperf 00:14:39.663 ************************************ 00:14:39.663 00:14:39.663 real 0m25.364s 00:14:39.663 user 0m14.416s 00:14:39.663 sys 0m10.474s 00:14:39.663 23:16:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.663 23:16:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:39.663 23:16:11 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:39.663 23:16:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:39.663 23:16:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.663 23:16:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:39.663 ************************************ 00:14:39.663 START TEST xnvme_fio_plugin 00:14:39.663 ************************************ 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 
-- # gen_conf 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:39.663 23:16:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:39.663 { 00:14:39.663 "subsystems": [ 00:14:39.663 { 00:14:39.663 "subsystem": "bdev", 00:14:39.663 "config": [ 00:14:39.663 { 00:14:39.663 "params": { 00:14:39.663 "io_mechanism": "io_uring_cmd", 00:14:39.663 "conserve_cpu": false, 00:14:39.663 "filename": "/dev/ng0n1", 00:14:39.663 "name": "xnvme_bdev" 00:14:39.663 }, 00:14:39.663 "method": "bdev_xnvme_create" 00:14:39.663 }, 00:14:39.663 { 00:14:39.663 "method": "bdev_wait_for_examine" 00:14:39.663 } 00:14:39.663 ] 00:14:39.663 } 00:14:39.663 ] 00:14:39.663 } 00:14:39.663 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:39.663 fio-3.35 00:14:39.663 Starting 1 thread 00:14:46.250 00:14:46.250 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71194: Mon Nov 25 23:16:17 2024 00:14:46.250 read: IOPS=40.5k, BW=158MiB/s (166MB/s)(791MiB/5001msec) 00:14:46.250 slat (nsec): min=2725, max=69621, avg=3886.63, stdev=2252.31 00:14:46.250 clat (usec): min=595, max=4259, avg=1424.64, stdev=377.77 00:14:46.250 lat (usec): min=597, max=4316, avg=1428.52, stdev=378.17 00:14:46.250 clat percentiles (usec): 00:14:46.250 | 1.00th=[ 709], 5.00th=[ 816], 10.00th=[ 914], 20.00th=[ 1074], 00:14:46.250 | 30.00th=[ 1205], 40.00th=[ 1319], 50.00th=[ 1434], 60.00th=[ 1532], 00:14:46.250 | 70.00th=[ 1631], 80.00th=[ 1745], 90.00th=[ 1893], 95.00th=[ 2040], 00:14:46.250 | 99.00th=[ 2343], 99.50th=[ 2474], 99.90th=[ 2868], 99.95th=[ 3064], 00:14:46.250 | 99.99th=[ 4080] 00:14:46.250 bw ( KiB/s): min=143360, max=204800, per=99.58%, avg=161233.00, stdev=19270.81, samples=9 00:14:46.250 iops : min=35840, max=51200, avg=40308.22, stdev=4817.71, samples=9 00:14:46.250 lat (usec) : 750=2.38%, 1000=12.56% 00:14:46.250 lat (msec) : 2=79.11%, 4=5.94%, 10=0.01% 00:14:46.250 cpu : usr=40.18%, sys=58.62%, ctx=11, majf=0, minf=762 00:14:46.250 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:46.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.250 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:14:46.250 issued rwts: total=202432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.250 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:46.250 00:14:46.250 Run status group 0 (all jobs): 00:14:46.250 READ: bw=158MiB/s (166MB/s), 158MiB/s-158MiB/s (166MB/s-166MB/s), io=791MiB (829MB), run=5001-5001msec 00:14:46.250 ----------------------------------------------------- 00:14:46.250 Suppressions used: 00:14:46.250 count bytes template 00:14:46.250 1 11 /usr/src/fio/parse.c 00:14:46.250 1 8 libtcmalloc_minimal.so 00:14:46.250 1 904 libcrypto.so 00:14:46.250 ----------------------------------------------------- 00:14:46.250 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:46.250 23:16:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:46.250 { 00:14:46.250 "subsystems": [ 00:14:46.250 { 00:14:46.250 "subsystem": "bdev", 00:14:46.250 "config": [ 00:14:46.250 { 00:14:46.250 "params": { 00:14:46.250 "io_mechanism": "io_uring_cmd", 00:14:46.250 "conserve_cpu": false, 00:14:46.250 "filename": "/dev/ng0n1", 00:14:46.250 "name": "xnvme_bdev" 00:14:46.250 }, 00:14:46.250 "method": "bdev_xnvme_create" 00:14:46.250 }, 00:14:46.250 { 00:14:46.250 "method": "bdev_wait_for_examine" 00:14:46.250 } 00:14:46.250 ] 00:14:46.250 } 00:14:46.250 ] 00:14:46.250 } 00:14:46.513 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:46.513 fio-3.35 00:14:46.513 Starting 1 thread 00:14:53.104 00:14:53.104 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71279: Mon Nov 25 23:16:24 2024 00:14:53.104 write: IOPS=38.1k, BW=149MiB/s (156MB/s)(745MiB/5001msec); 0 zone resets 00:14:53.104 slat (nsec): min=2789, max=83892, avg=3993.37, stdev=2472.85 00:14:53.104 clat (usec): min=212, max=4933, avg=1518.50, stdev=315.27 00:14:53.104 lat (usec): min=215, max=4959, avg=1522.49, stdev=315.79 00:14:53.104 clat percentiles (usec): 00:14:53.104 | 1.00th=[ 865], 5.00th=[ 1057], 10.00th=[ 1156], 20.00th=[ 1270], 00:14:53.104 | 30.00th=[ 1352], 40.00th=[ 1434], 50.00th=[ 1500], 60.00th=[ 1565], 00:14:53.104 | 70.00th=[ 1647], 80.00th=[ 1745], 90.00th=[ 1893], 95.00th=[ 2040], 00:14:53.104 | 99.00th=[ 2376], 99.50th=[ 2606], 99.90th=[ 3425], 99.95th=[ 3785], 00:14:53.104 | 99.99th=[ 4686] 00:14:53.104 bw ( KiB/s): min=143128, max=163400, per=99.51%, avg=151827.56, stdev=6004.54, samples=9 00:14:53.104 iops : min=35782, max=40850, avg=37956.89, stdev=1501.14, samples=9 00:14:53.104 lat (usec) : 250=0.01%, 500=0.04%, 750=0.47%, 1000=2.44% 00:14:53.104 lat (msec) : 2=90.96%, 4=6.05%, 10=0.04% 00:14:53.104 cpu : usr=39.28%, sys=59.34%, ctx=24, majf=0, minf=762 00:14:53.104 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=12.0%, 16=24.3%, 32=51.7%, >=64=1.7% 00:14:53.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:53.104 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:53.104 issued rwts: total=0,190747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:53.104 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:53.104 00:14:53.104 Run status group 0 (all jobs): 00:14:53.104 WRITE: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=745MiB (781MB), run=5001-5001msec 00:14:53.104 ----------------------------------------------------- 00:14:53.104 Suppressions used: 00:14:53.104 count bytes template 00:14:53.104 1 11 /usr/src/fio/parse.c 00:14:53.104 1 8 libtcmalloc_minimal.so 00:14:53.104 1 904 libcrypto.so 00:14:53.104 ----------------------------------------------------- 00:14:53.104 00:14:53.104 00:14:53.104 real 0m13.438s 00:14:53.104 user 0m6.588s 00:14:53.104 sys 0m6.401s 00:14:53.104 ************************************ 00:14:53.104 END TEST xnvme_fio_plugin 00:14:53.104 ************************************ 00:14:53.104 23:16:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.104 23:16:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:53.104 23:16:25 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:53.104 23:16:25 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:53.104 23:16:25 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:53.104 23:16:25 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:53.104 23:16:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:53.104 23:16:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.104 23:16:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:53.104 ************************************ 00:14:53.104 START TEST xnvme_rpc 00:14:53.104 ************************************ 00:14:53.104 23:16:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:53.104 23:16:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:53.104 23:16:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:53.104 23:16:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:53.104 23:16:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:53.104 23:16:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71364 00:14:53.104 23:16:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71364 00:14:53.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.104 23:16:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71364 ']' 00:14:53.104 23:16:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.104 23:16:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.104 23:16:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.104 23:16:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.104 23:16:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.104 23:16:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.104 [2024-11-25 23:16:25.306869] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:14:53.104 [2024-11-25 23:16:25.306998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71364 ] 00:14:53.104 [2024-11-25 23:16:25.466993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.366 [2024-11-25 23:16:25.602766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.939 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.939 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:53.939 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:53.939 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.939 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.939 xnvme_bdev 00:14:53.939 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.939 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:53.939 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:53.939 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.939 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.939 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:53.939 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71364 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71364 ']' 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71364 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71364 00:14:54.202 killing process with pid 71364 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71364' 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71364 00:14:54.202 23:16:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71364 00:14:56.203 00:14:56.203 real 0m2.803s 00:14:56.203 user 0m2.854s 00:14:56.203 sys 0m0.434s 00:14:56.203 23:16:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.203 23:16:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.203 ************************************ 00:14:56.203 END TEST xnvme_rpc 00:14:56.203 ************************************ 00:14:56.203 23:16:28 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:56.203 23:16:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:56.203 23:16:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.203 23:16:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:56.203 ************************************ 00:14:56.203 START TEST xnvme_bdevperf 00:14:56.203 ************************************ 00:14:56.203 23:16:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:56.203 23:16:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:56.203 23:16:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:56.203 23:16:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:56.203 23:16:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:56.203 23:16:28 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:56.203 23:16:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:56.203 23:16:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:56.203 { 00:14:56.203 "subsystems": [ 00:14:56.203 { 00:14:56.203 "subsystem": "bdev", 00:14:56.203 "config": [ 00:14:56.203 { 00:14:56.203 "params": { 00:14:56.203 "io_mechanism": "io_uring_cmd", 00:14:56.203 "conserve_cpu": true, 00:14:56.203 "filename": "/dev/ng0n1", 00:14:56.203 "name": "xnvme_bdev" 00:14:56.203 }, 00:14:56.203 "method": "bdev_xnvme_create" 00:14:56.203 }, 00:14:56.203 { 00:14:56.204 "method": "bdev_wait_for_examine" 00:14:56.204 } 00:14:56.204 ] 00:14:56.204 } 00:14:56.204 ] 00:14:56.204 } 00:14:56.204 [2024-11-25 23:16:28.169797] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:14:56.204 [2024-11-25 23:16:28.169939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71433 ] 00:14:56.204 [2024-11-25 23:16:28.335677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.204 [2024-11-25 23:16:28.464845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.465 Running I/O for 5 seconds... 00:14:58.803 38960.00 IOPS, 152.19 MiB/s [2024-11-25T23:16:32.113Z] 39446.00 IOPS, 154.09 MiB/s [2024-11-25T23:16:33.053Z] 38414.00 IOPS, 150.05 MiB/s [2024-11-25T23:16:33.996Z] 37786.25 IOPS, 147.60 MiB/s 00:15:01.627 Latency(us) 00:15:01.627 [2024-11-25T23:16:33.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.627 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:01.627 xnvme_bdev : 5.00 38696.31 151.16 0.00 0.00 1649.79 683.72 4083.40 00:15:01.627 [2024-11-25T23:16:33.996Z] =================================================================================================================== 00:15:01.627 [2024-11-25T23:16:33.996Z] Total : 38696.31 151.16 0.00 0.00 1649.79 683.72 4083.40 00:15:02.199 23:16:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:02.199 23:16:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:02.199 23:16:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:02.199 23:16:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:02.199 23:16:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:02.199 { 00:15:02.199 "subsystems": [ 00:15:02.199 { 00:15:02.199 "subsystem": "bdev", 00:15:02.199 "config": [ 00:15:02.199 { 00:15:02.199 "params": { 00:15:02.199 "io_mechanism": "io_uring_cmd", 00:15:02.199 "conserve_cpu": true, 00:15:02.199 "filename": "/dev/ng0n1", 00:15:02.199 "name": "xnvme_bdev" 00:15:02.199 }, 00:15:02.199 "method": "bdev_xnvme_create" 00:15:02.199 }, 00:15:02.199 { 00:15:02.199 "method": "bdev_wait_for_examine" 00:15:02.199 } 00:15:02.199 ] 00:15:02.199 } 00:15:02.199 ] 00:15:02.199 } 00:15:02.459 [2024-11-25 23:16:34.578418] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:15:02.459 [2024-11-25 23:16:34.578766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71509 ] 00:15:02.459 [2024-11-25 23:16:34.740493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.719 [2024-11-25 23:16:34.862593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.980 Running I/O for 5 seconds... 00:15:04.865 39095.00 IOPS, 152.71 MiB/s [2024-11-25T23:16:38.177Z] 39472.50 IOPS, 154.19 MiB/s [2024-11-25T23:16:39.563Z] 39570.67 IOPS, 154.57 MiB/s [2024-11-25T23:16:40.509Z] 39466.50 IOPS, 154.17 MiB/s [2024-11-25T23:16:40.509Z] 33290.20 IOPS, 130.04 MiB/s 00:15:08.140 Latency(us) 00:15:08.140 [2024-11-25T23:16:40.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.140 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:08.140 xnvme_bdev : 5.02 33162.58 129.54 0.00 0.00 1920.86 71.29 36296.86 00:15:08.140 [2024-11-25T23:16:40.509Z] =================================================================================================================== 00:15:08.140 [2024-11-25T23:16:40.509Z] Total : 33162.58 129.54 0.00 0.00 1920.86 71.29 36296.86 00:15:08.713 23:16:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:08.713 23:16:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:08.714 23:16:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:08.714 23:16:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:08.714 23:16:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:08.714 { 00:15:08.714 "subsystems": [ 00:15:08.714 { 00:15:08.714 "subsystem": "bdev", 00:15:08.714 "config": [ 00:15:08.714 { 00:15:08.714 "params": { 00:15:08.714 "io_mechanism": "io_uring_cmd", 00:15:08.714 "conserve_cpu": true, 00:15:08.714 "filename": "/dev/ng0n1", 00:15:08.714 "name": "xnvme_bdev" 00:15:08.714 }, 00:15:08.714 "method": "bdev_xnvme_create" 00:15:08.714 }, 00:15:08.714 { 00:15:08.714 "method": "bdev_wait_for_examine" 00:15:08.714 } 00:15:08.714 ] 00:15:08.714 } 00:15:08.714 ] 00:15:08.714 } 00:15:08.714 [2024-11-25 23:16:41.002677] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:15:08.714 [2024-11-25 23:16:41.002792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71589 ] 00:15:08.975 [2024-11-25 23:16:41.163041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.975 [2024-11-25 23:16:41.259573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.235 Running I/O for 5 seconds... 
00:15:11.615 79872.00 IOPS, 312.00 MiB/s [2024-11-25T23:16:44.596Z] 79840.00 IOPS, 311.88 MiB/s [2024-11-25T23:16:45.982Z] 79765.33 IOPS, 311.58 MiB/s [2024-11-25T23:16:46.549Z] 79872.00 IOPS, 312.00 MiB/s [2024-11-25T23:16:46.808Z] 83097.60 IOPS, 324.60 MiB/s 00:15:14.439 Latency(us) 00:15:14.439 [2024-11-25T23:16:46.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.439 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:14.439 xnvme_bdev : 5.00 83071.26 324.50 0.00 0.00 767.01 374.94 2659.25 00:15:14.439 [2024-11-25T23:16:46.808Z] =================================================================================================================== 00:15:14.439 [2024-11-25T23:16:46.808Z] Total : 83071.26 324.50 0.00 0.00 767.01 374.94 2659.25 00:15:15.006 23:16:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:15.006 23:16:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:15.006 23:16:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:15.006 23:16:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:15.006 23:16:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:15.006 { 00:15:15.006 "subsystems": [ 00:15:15.006 { 00:15:15.006 "subsystem": "bdev", 00:15:15.006 "config": [ 00:15:15.006 { 00:15:15.006 "params": { 00:15:15.006 "io_mechanism": "io_uring_cmd", 00:15:15.006 "conserve_cpu": true, 00:15:15.006 "filename": "/dev/ng0n1", 00:15:15.006 "name": "xnvme_bdev" 00:15:15.006 }, 00:15:15.006 "method": "bdev_xnvme_create" 00:15:15.006 }, 00:15:15.006 { 00:15:15.006 "method": "bdev_wait_for_examine" 00:15:15.006 } 00:15:15.006 ] 00:15:15.006 } 00:15:15.006 ] 00:15:15.006 } 00:15:15.006 [2024-11-25 23:16:47.157321] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:15:15.006 [2024-11-25 23:16:47.157434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71659 ] 00:15:15.006 [2024-11-25 23:16:47.311631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.264 [2024-11-25 23:16:47.390564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.264 Running I/O for 5 seconds... 
00:15:17.580 61982.00 IOPS, 242.12 MiB/s [2024-11-25T23:16:50.888Z] 58077.00 IOPS, 226.86 MiB/s [2024-11-25T23:16:51.827Z] 54776.67 IOPS, 213.97 MiB/s [2024-11-25T23:16:52.767Z] 52271.50 IOPS, 204.19 MiB/s [2024-11-25T23:16:52.767Z] 50631.20 IOPS, 197.78 MiB/s 00:15:20.398 Latency(us) 00:15:20.398 [2024-11-25T23:16:52.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.398 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:20.398 xnvme_bdev : 5.00 50607.68 197.69 0.00 0.00 1259.64 103.19 17442.66 00:15:20.398 [2024-11-25T23:16:52.767Z] =================================================================================================================== 00:15:20.398 [2024-11-25T23:16:52.767Z] Total : 50607.68 197.69 0.00 0.00 1259.64 103.19 17442.66 00:15:21.340 00:15:21.340 real 0m25.271s 00:15:21.340 user 0m16.274s 00:15:21.340 sys 0m6.809s 00:15:21.340 23:16:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.340 23:16:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:21.340 ************************************ 00:15:21.340 END TEST xnvme_bdevperf 00:15:21.340 ************************************ 00:15:21.340 23:16:53 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:21.340 23:16:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:21.340 23:16:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.340 23:16:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.340 ************************************ 00:15:21.340 START TEST xnvme_fio_plugin 00:15:21.340 ************************************ 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 
00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:21.340 23:16:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.340 { 00:15:21.340 "subsystems": [ 00:15:21.340 { 00:15:21.340 "subsystem": "bdev", 00:15:21.340 "config": [ 00:15:21.340 { 00:15:21.340 "params": { 00:15:21.340 "io_mechanism": "io_uring_cmd", 00:15:21.340 "conserve_cpu": true, 00:15:21.340 "filename": "/dev/ng0n1", 00:15:21.340 "name": "xnvme_bdev" 00:15:21.340 }, 00:15:21.340 "method": "bdev_xnvme_create" 00:15:21.340 }, 00:15:21.340 { 00:15:21.340 "method": "bdev_wait_for_examine" 00:15:21.340 } 00:15:21.340 ] 00:15:21.340 } 00:15:21.340 ] 00:15:21.340 } 00:15:21.340 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:21.340 fio-3.35 00:15:21.340 Starting 1 thread 00:15:27.929 00:15:27.929 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71777: Mon Nov 25 23:16:59 2024 00:15:27.929 read: IOPS=35.4k, BW=138MiB/s (145MB/s)(693MiB/5002msec) 00:15:27.929 slat (nsec): min=2720, max=70521, avg=3745.54, stdev=2389.23 00:15:27.929 clat (usec): min=793, max=4886, avg=1651.70, stdev=336.23 00:15:27.929 lat (usec): min=796, max=4907, avg=1655.44, stdev=336.80 00:15:27.929 clat percentiles (usec): 00:15:27.929 | 1.00th=[ 1020], 5.00th=[ 1172], 10.00th=[ 1270], 20.00th=[ 1385], 00:15:27.929 | 30.00th=[ 1467], 40.00th=[ 1532], 50.00th=[ 1614], 60.00th=[ 1696], 00:15:27.929 | 70.00th=[ 1795], 80.00th=[ 1909], 90.00th=[ 2089], 95.00th=[ 2245], 00:15:27.929 | 99.00th=[ 2573], 99.50th=[ 2737], 99.90th=[ 3458], 99.95th=[ 3916], 00:15:27.929 | 99.99th=[ 4817] 00:15:27.929 bw ( KiB/s): min=134656, max=157184, per=98.76%, avg=140003.56, stdev=6749.43, samples=9 00:15:27.929 iops : min=33664, max=39296, avg=35000.89, stdev=1687.36, samples=9 00:15:27.929 lat (usec) : 1000=0.73% 00:15:27.929 lat (msec) : 2=85.15%, 4=14.07%, 10=0.05% 00:15:27.929 cpu : usr=58.93%, sys=37.83%, ctx=7, majf=0, minf=762 00:15:27.929 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:27.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.929 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:15:27.929 issued rwts: total=177280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.929 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:27.929 00:15:27.929 Run status group 0 (all jobs): 00:15:27.929 READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=693MiB (726MB), run=5002-5002msec 00:15:27.929 ----------------------------------------------------- 00:15:27.929 Suppressions used: 00:15:27.929 count bytes template 00:15:27.929 1 11 /usr/src/fio/parse.c 00:15:27.929 1 8 libtcmalloc_minimal.so 00:15:27.929 1 904 libcrypto.so 00:15:27.929 ----------------------------------------------------- 00:15:27.929 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:27.929 23:17:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:27.929 { 00:15:27.929 "subsystems": [ 00:15:27.929 { 00:15:27.929 "subsystem": "bdev", 00:15:27.929 "config": [ 00:15:27.929 { 00:15:27.929 "params": { 00:15:27.929 "io_mechanism": "io_uring_cmd", 00:15:27.929 "conserve_cpu": true, 00:15:27.929 "filename": "/dev/ng0n1", 00:15:27.929 "name": "xnvme_bdev" 00:15:27.929 }, 00:15:27.929 "method": "bdev_xnvme_create" 00:15:27.929 }, 00:15:27.929 { 00:15:27.929 "method": "bdev_wait_for_examine" 00:15:27.929 } 00:15:27.929 ] 00:15:27.929 } 00:15:27.929 ] 00:15:27.929 } 00:15:28.190 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:28.190 fio-3.35 00:15:28.190 Starting 1 thread 00:15:34.827 00:15:34.827 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71862: Mon Nov 25 23:17:06 2024 00:15:34.827 write: IOPS=39.2k, BW=153MiB/s (161MB/s)(766MiB/5002msec); 0 zone resets 00:15:34.827 slat (usec): min=2, max=289, avg= 3.93, stdev= 2.48 00:15:34.827 clat (usec): min=563, max=6711, avg=1473.24, stdev=319.30 00:15:34.827 lat (usec): min=566, max=6714, avg=1477.18, stdev=319.82 00:15:34.827 clat percentiles (usec): 00:15:34.827 | 1.00th=[ 906], 5.00th=[ 1029], 10.00th=[ 1106], 20.00th=[ 1221], 00:15:34.827 | 30.00th=[ 1303], 40.00th=[ 1369], 50.00th=[ 1450], 60.00th=[ 1516], 00:15:34.827 | 70.00th=[ 1598], 80.00th=[ 1696], 90.00th=[ 1860], 95.00th=[ 1991], 00:15:34.827 | 99.00th=[ 2376], 99.50th=[ 2606], 99.90th=[ 3818], 99.95th=[ 3982], 00:15:34.827 | 99.99th=[ 5080] 00:15:34.827 bw ( KiB/s): min=143616, max=170080, per=100.00%, avg=157754.67, stdev=12061.81, samples=9 00:15:34.827 iops : min=35902, max=42520, avg=39438.67, stdev=3015.51, samples=9 00:15:34.827 lat (usec) : 750=0.07%, 1000=3.58% 00:15:34.827 lat (msec) : 2=91.46%, 4=4.84%, 10=0.05% 00:15:34.827 cpu : usr=58.07%, sys=37.75%, ctx=35, majf=0, minf=762 00:15:34.827 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.4%, >=64=1.6% 00:15:34.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.827 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:34.827 issued rwts: total=0,196206,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.827 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:34.827 00:15:34.827 Run status group 0 (all jobs): 00:15:34.827 WRITE: bw=153MiB/s (161MB/s), 153MiB/s-153MiB/s (161MB/s-161MB/s), io=766MiB (804MB), run=5002-5002msec 00:15:34.827 ----------------------------------------------------- 00:15:34.827 Suppressions used: 00:15:34.827 count bytes template 00:15:34.827 1 11 /usr/src/fio/parse.c 00:15:34.827 1 8 libtcmalloc_minimal.so 00:15:34.827 1 904 libcrypto.so 00:15:34.827 ----------------------------------------------------- 00:15:34.827 00:15:34.827 00:15:34.827 real 0m13.592s 00:15:34.827 user 0m8.549s 00:15:34.827 sys 0m4.352s 00:15:34.827 23:17:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.827 ************************************ 00:15:34.827 END TEST xnvme_fio_plugin 00:15:34.827 ************************************ 00:15:34.827 23:17:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:34.827 Process with pid 71364 is not found 00:15:34.827 23:17:07 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71364 00:15:34.827 23:17:07 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71364 ']' 00:15:34.827 23:17:07 nvme_xnvme -- common/autotest_common.sh@958 -- # 
kill -0 71364 00:15:34.827 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71364) - No such process 00:15:34.827 23:17:07 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71364 is not found' 00:15:34.827 00:15:34.827 real 3m28.263s 00:15:34.827 user 1m59.938s 00:15:34.827 sys 1m13.708s 00:15:34.827 23:17:07 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.827 ************************************ 00:15:34.827 END TEST nvme_xnvme 00:15:34.827 ************************************ 00:15:34.827 23:17:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.827 23:17:07 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:34.827 23:17:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:34.827 23:17:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.827 23:17:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.827 ************************************ 00:15:34.827 START TEST blockdev_xnvme 00:15:34.827 ************************************ 00:15:34.827 23:17:07 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:35.089 * Looking for test storage... 00:15:35.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:35.089 23:17:07 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:35.089 23:17:07 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:35.089 23:17:07 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:35.089 23:17:07 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:35.089 23:17:07 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:35.089 23:17:07 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:35.089 23:17:07 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:35.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.089 --rc genhtml_branch_coverage=1 00:15:35.089 --rc genhtml_function_coverage=1 00:15:35.089 --rc genhtml_legend=1 00:15:35.089 --rc geninfo_all_blocks=1 00:15:35.089 --rc geninfo_unexecuted_blocks=1 00:15:35.089 00:15:35.089 ' 00:15:35.089 23:17:07 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:35.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.089 --rc genhtml_branch_coverage=1 00:15:35.089 --rc genhtml_function_coverage=1 00:15:35.089 --rc genhtml_legend=1 00:15:35.089 --rc geninfo_all_blocks=1 00:15:35.089 --rc geninfo_unexecuted_blocks=1 00:15:35.089 00:15:35.089 ' 00:15:35.089 23:17:07 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:35.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.089 --rc genhtml_branch_coverage=1 00:15:35.089 --rc genhtml_function_coverage=1 00:15:35.089 --rc genhtml_legend=1 00:15:35.089 --rc geninfo_all_blocks=1 00:15:35.089 --rc geninfo_unexecuted_blocks=1 00:15:35.089 00:15:35.089 ' 00:15:35.089 23:17:07 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:35.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.089 --rc genhtml_branch_coverage=1 00:15:35.089 --rc genhtml_function_coverage=1 00:15:35.089 --rc genhtml_legend=1 00:15:35.089 --rc geninfo_all_blocks=1 00:15:35.089 --rc geninfo_unexecuted_blocks=1 00:15:35.089 00:15:35.089 ' 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:15:35.089 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:35.090 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71995 00:15:35.090 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:35.090 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71995 00:15:35.090 23:17:07 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 71995 ']' 00:15:35.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.090 23:17:07 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.090 23:17:07 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.090 23:17:07 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.090 23:17:07 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.090 23:17:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:35.090 23:17:07 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:35.090 [2024-11-25 23:17:07.396781] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:15:35.090 [2024-11-25 23:17:07.396946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71995 ] 00:15:35.350 [2024-11-25 23:17:07.558839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.350 [2024-11-25 23:17:07.682046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.296 23:17:08 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.296 23:17:08 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:36.296 23:17:08 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:36.296 23:17:08 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:15:36.296 23:17:08 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:36.296 23:17:08 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:36.296 23:17:08 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:36.557 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:37.129 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:37.129 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:37.129 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:37.129 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.129 23:17:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.129 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:37.129 nvme0n1 00:15:37.129 nvme0n2 00:15:37.129 nvme0n3 00:15:37.129 nvme1n1 00:15:37.390 nvme2n1 00:15:37.390 nvme3n1 00:15:37.390 23:17:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.390 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:37.390 23:17:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.390 23:17:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.390 23:17:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.390 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:37.390 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:37.390 23:17:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.390 23:17:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.390 23:17:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.390 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:37.390 23:17:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.390 23:17:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.391 23:17:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.391 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:37.391 23:17:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.391 23:17:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.391 23:17:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.391 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:37.391 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:37.391 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:37.391 23:17:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.391 23:17:09 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.391 23:17:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.391 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:37.391 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:37.391 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "dc952816-3255-4870-88a2-e3b1b1b27f1c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dc952816-3255-4870-88a2-e3b1b1b27f1c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "fb9261ea-cbcd-4807-8b8e-2e5b69d99a6f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fb9261ea-cbcd-4807-8b8e-2e5b69d99a6f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "6771aced-8396-4a65-a69f-aed44059d24c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6771aced-8396-4a65-a69f-aed44059d24c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "0679a510-fa85-496d-91de-64a6533af6ca"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0679a510-fa85-496d-91de-64a6533af6ca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "1b8149f9-2bf5-4506-a4c7-7f3fda4d2b10"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1b8149f9-2bf5-4506-a4c7-7f3fda4d2b10",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a15de700-3867-49b6-a751-2b62f09c73eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a15de700-3867-49b6-a751-2b62f09c73eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:37.391 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:37.391 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:37.391 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:37.391 23:17:09 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 71995 00:15:37.391 23:17:09 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71995 ']' 00:15:37.391 23:17:09 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71995 00:15:37.391 23:17:09 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:37.391 23:17:09 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.391 23:17:09 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71995 00:15:37.391 killing process with pid 71995 00:15:37.391 23:17:09 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:37.391 23:17:09 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.391 23:17:09 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71995' 00:15:37.391 23:17:09 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71995 00:15:37.391 
23:17:09 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71995 00:15:39.307 23:17:11 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:39.307 23:17:11 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:39.307 23:17:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:39.307 23:17:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.307 23:17:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:39.307 ************************************ 00:15:39.307 START TEST bdev_hello_world 00:15:39.307 ************************************ 00:15:39.307 23:17:11 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:39.307 [2024-11-25 23:17:11.425122] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:15:39.307 [2024-11-25 23:17:11.425458] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72275 ] 00:15:39.307 [2024-11-25 23:17:11.581718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.568 [2024-11-25 23:17:11.708595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.829 [2024-11-25 23:17:12.111482] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:39.829 [2024-11-25 23:17:12.111549] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:39.829 [2024-11-25 23:17:12.111568] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:39.829 [2024-11-25 23:17:12.113735] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:39.829 [2024-11-25 23:17:12.114377] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:39.829 [2024-11-25 23:17:12.114407] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:39.829 [2024-11-25 23:17:12.115341] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
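The hello_world pass that just completed is a single run of the example binary against the generated bdev configuration: it opens nvme0n1, writes "Hello World!" through an io channel, reads it back, and stops the app. A sketch of the equivalent manual invocation, assuming the repo layout from the trace:

# Same invocation as run_test bdev_hello_world above; --json supplies the
# bdev_xnvme_create configuration and -b names the bdev to exercise.
cd /home/vagrant/spdk_repo/spdk
build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1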
00:15:39.829 00:15:39.829 [2024-11-25 23:17:12.115411] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:40.774 00:15:40.774 real 0m1.549s 00:15:40.774 user 0m1.169s 00:15:40.774 sys 0m0.231s 00:15:40.774 23:17:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.774 23:17:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:40.774 ************************************ 00:15:40.774 END TEST bdev_hello_world 00:15:40.774 ************************************ 00:15:40.774 23:17:12 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:40.774 23:17:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:40.774 23:17:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.774 23:17:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.774 ************************************ 00:15:40.774 START TEST bdev_bounds 00:15:40.774 ************************************ 00:15:40.774 Process bdevio pid: 72317 00:15:40.774 23:17:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:40.774 23:17:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72317 00:15:40.774 23:17:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:40.774 23:17:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72317' 00:15:40.774 23:17:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72317 00:15:40.774 23:17:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72317 ']' 00:15:40.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.774 23:17:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:40.774 23:17:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.774 23:17:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.774 23:17:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.774 23:17:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.774 23:17:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:40.774 [2024-11-25 23:17:13.048228] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:15:40.774 [2024-11-25 23:17:13.048385] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72317 ] 00:15:41.035 [2024-11-25 23:17:13.213236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:41.035 [2024-11-25 23:17:13.339582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.035 [2024-11-25 23:17:13.339895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.035 [2024-11-25 23:17:13.339942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.608 23:17:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.608 23:17:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:41.608 23:17:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:41.870 I/O targets: 00:15:41.870 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:41.870 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:41.870 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:41.870 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:41.870 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:41.870 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:41.870 00:15:41.870 00:15:41.870 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.870 http://cunit.sourceforge.net/ 00:15:41.870 00:15:41.870 00:15:41.870 Suite: bdevio tests on: nvme3n1 00:15:41.870 Test: blockdev write read block ...passed 00:15:41.870 Test: blockdev write zeroes read block ...passed 00:15:41.870 Test: blockdev write zeroes read no split ...passed 00:15:41.870 Test: blockdev write zeroes read split ...passed 00:15:41.870 Test: blockdev write zeroes read split partial ...passed 00:15:41.870 Test: blockdev reset ...passed 00:15:41.870 Test: blockdev write read 8 blocks ...passed 00:15:41.870 Test: blockdev write read size > 128k ...passed 00:15:41.870 Test: blockdev write read invalid size ...passed 00:15:41.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:41.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:41.870 Test: blockdev write read max offset ...passed 00:15:41.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:41.870 Test: blockdev writev readv 8 blocks ...passed 00:15:41.871 Test: blockdev writev readv 30 x 1block ...passed 00:15:41.871 Test: blockdev writev readv block ...passed 00:15:41.871 Test: blockdev writev readv size > 128k ...passed 00:15:41.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:41.871 Test: blockdev comparev and writev ...passed 00:15:41.871 Test: blockdev nvme passthru rw ...passed 00:15:41.871 Test: blockdev nvme passthru vendor specific ...passed 00:15:41.871 Test: blockdev nvme admin passthru ...passed 00:15:41.871 Test: blockdev copy ...passed 00:15:41.871 Suite: bdevio tests on: nvme2n1 00:15:41.871 Test: blockdev write read block ...passed 00:15:41.871 Test: blockdev write zeroes read block ...passed 00:15:41.871 Test: blockdev write zeroes read no split ...passed 00:15:41.871 Test: blockdev write zeroes read split ...passed 00:15:41.871 Test: blockdev write zeroes read split partial ...passed 00:15:41.871 Test: blockdev reset ...passed 
00:15:41.871 Test: blockdev write read 8 blocks ...passed 00:15:41.871 Test: blockdev write read size > 128k ...passed 00:15:41.871 Test: blockdev write read invalid size ...passed 00:15:41.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:41.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:41.871 Test: blockdev write read max offset ...passed 00:15:41.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:41.871 Test: blockdev writev readv 8 blocks ...passed 00:15:41.871 Test: blockdev writev readv 30 x 1block ...passed 00:15:41.871 Test: blockdev writev readv block ...passed 00:15:41.871 Test: blockdev writev readv size > 128k ...passed 00:15:41.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:41.871 Test: blockdev comparev and writev ...passed 00:15:41.871 Test: blockdev nvme passthru rw ...passed 00:15:41.871 Test: blockdev nvme passthru vendor specific ...passed 00:15:41.871 Test: blockdev nvme admin passthru ...passed 00:15:41.871 Test: blockdev copy ...passed 00:15:41.871 Suite: bdevio tests on: nvme1n1 00:15:41.871 Test: blockdev write read block ...passed 00:15:41.871 Test: blockdev write zeroes read block ...passed 00:15:41.871 Test: blockdev write zeroes read no split ...passed 00:15:41.871 Test: blockdev write zeroes read split ...passed 00:15:41.871 Test: blockdev write zeroes read split partial ...passed 00:15:41.871 Test: blockdev reset ...passed 00:15:41.871 Test: blockdev write read 8 blocks ...passed 00:15:42.133 Test: blockdev write read size > 128k ...passed 00:15:42.133 Test: blockdev write read invalid size ...passed 00:15:42.133 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.133 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.133 Test: blockdev write read max offset ...passed 00:15:42.133 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.133 Test: blockdev writev readv 8 blocks ...passed 00:15:42.133 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.133 Test: blockdev writev readv block ...passed 00:15:42.133 Test: blockdev writev readv size > 128k ...passed 00:15:42.133 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.133 Test: blockdev comparev and writev ...passed 00:15:42.133 Test: blockdev nvme passthru rw ...passed 00:15:42.133 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.133 Test: blockdev nvme admin passthru ...passed 00:15:42.133 Test: blockdev copy ...passed 00:15:42.133 Suite: bdevio tests on: nvme0n3 00:15:42.133 Test: blockdev write read block ...passed 00:15:42.133 Test: blockdev write zeroes read block ...passed 00:15:42.133 Test: blockdev write zeroes read no split ...passed 00:15:42.133 Test: blockdev write zeroes read split ...passed 00:15:42.133 Test: blockdev write zeroes read split partial ...passed 00:15:42.133 Test: blockdev reset ...passed 00:15:42.133 Test: blockdev write read 8 blocks ...passed 00:15:42.133 Test: blockdev write read size > 128k ...passed 00:15:42.133 Test: blockdev write read invalid size ...passed 00:15:42.133 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.133 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.133 Test: blockdev write read max offset ...passed 00:15:42.133 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.133 Test: blockdev writev readv 8 blocks 
...passed 00:15:42.133 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.133 Test: blockdev writev readv block ...passed 00:15:42.133 Test: blockdev writev readv size > 128k ...passed 00:15:42.133 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.133 Test: blockdev comparev and writev ...passed 00:15:42.133 Test: blockdev nvme passthru rw ...passed 00:15:42.133 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.133 Test: blockdev nvme admin passthru ...passed 00:15:42.133 Test: blockdev copy ...passed 00:15:42.133 Suite: bdevio tests on: nvme0n2 00:15:42.133 Test: blockdev write read block ...passed 00:15:42.133 Test: blockdev write zeroes read block ...passed 00:15:42.133 Test: blockdev write zeroes read no split ...passed 00:15:42.133 Test: blockdev write zeroes read split ...passed 00:15:42.133 Test: blockdev write zeroes read split partial ...passed 00:15:42.133 Test: blockdev reset ...passed 00:15:42.133 Test: blockdev write read 8 blocks ...passed 00:15:42.133 Test: blockdev write read size > 128k ...passed 00:15:42.133 Test: blockdev write read invalid size ...passed 00:15:42.133 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.133 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.133 Test: blockdev write read max offset ...passed 00:15:42.133 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.133 Test: blockdev writev readv 8 blocks ...passed 00:15:42.133 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.133 Test: blockdev writev readv block ...passed 00:15:42.133 Test: blockdev writev readv size > 128k ...passed 00:15:42.133 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.133 Test: blockdev comparev and writev ...passed 00:15:42.133 Test: blockdev nvme passthru rw ...passed 00:15:42.133 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.133 Test: blockdev nvme admin passthru ...passed 00:15:42.133 Test: blockdev copy ...passed 00:15:42.133 Suite: bdevio tests on: nvme0n1 00:15:42.133 Test: blockdev write read block ...passed 00:15:42.133 Test: blockdev write zeroes read block ...passed 00:15:42.133 Test: blockdev write zeroes read no split ...passed 00:15:42.133 Test: blockdev write zeroes read split ...passed 00:15:42.133 Test: blockdev write zeroes read split partial ...passed 00:15:42.133 Test: blockdev reset ...passed 00:15:42.133 Test: blockdev write read 8 blocks ...passed 00:15:42.133 Test: blockdev write read size > 128k ...passed 00:15:42.133 Test: blockdev write read invalid size ...passed 00:15:42.133 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:42.133 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:42.133 Test: blockdev write read max offset ...passed 00:15:42.133 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:42.133 Test: blockdev writev readv 8 blocks ...passed 00:15:42.133 Test: blockdev writev readv 30 x 1block ...passed 00:15:42.133 Test: blockdev writev readv block ...passed 00:15:42.133 Test: blockdev writev readv size > 128k ...passed 00:15:42.133 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:42.133 Test: blockdev comparev and writev ...passed 00:15:42.133 Test: blockdev nvme passthru rw ...passed 00:15:42.133 Test: blockdev nvme passthru vendor specific ...passed 00:15:42.133 Test: blockdev nvme admin passthru ...passed 00:15:42.133 Test: blockdev copy ...passed 
00:15:42.133 00:15:42.133 Run Summary: Type Total Ran Passed Failed Inactive 00:15:42.133 suites 6 6 n/a 0 0 00:15:42.133 tests 138 138 138 0 0 00:15:42.134 asserts 780 780 780 0 n/a 00:15:42.134 00:15:42.134 Elapsed time = 1.260 seconds 00:15:42.134 0 00:15:42.395 23:17:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72317 00:15:42.395 23:17:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72317 ']' 00:15:42.395 23:17:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72317 00:15:42.395 23:17:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:42.395 23:17:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.395 23:17:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72317 00:15:42.395 23:17:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.395 23:17:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.395 23:17:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72317' 00:15:42.395 killing process with pid 72317 00:15:42.395 23:17:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72317 00:15:42.395 23:17:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72317 00:15:42.966 23:17:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:42.966 00:15:42.966 real 0m2.223s 00:15:42.966 user 0m5.356s 00:15:42.966 sys 0m0.405s 00:15:42.966 23:17:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.966 23:17:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:42.966 ************************************ 00:15:42.966 END TEST bdev_bounds 00:15:42.966 ************************************ 00:15:42.966 23:17:15 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:42.966 23:17:15 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:42.966 23:17:15 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.966 23:17:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:42.966 ************************************ 00:15:42.966 START TEST bdev_nbd 00:15:42.966 ************************************ 00:15:42.966 23:17:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:42.966 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
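The bdev_nbd phase that follows exports each of the six bdevs through the kernel nbd driver and probes every node before use. Condensed to one device, the start/probe/stop cycle the trace walks through looks roughly like this (a sketch only: the real waitfornbd helper retries the /proc/partitions probe up to 20 times before giving up, and /tmp/nbdtest stands in for the trace's test/bdev/nbdtest scratch file):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
# Export the bdev; nbd_start_disk prints the /dev/nbdX node it allocated.
nbd=$($rpc nbd_start_disk nvme0n1)
# Wait until the kernel has published the device, then prove one 4k direct
# read works end to end, exactly as the dd probes in the trace do.
grep -q -w "${nbd##*/}" /proc/partitions
dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
$rpc nbd_stop_disk "$nbd"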
00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:42.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72375 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72375 /var/tmp/spdk-nbd.sock 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72375 ']' 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.967 23:17:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:43.228 [2024-11-25 23:17:15.330505] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:15:43.228 [2024-11-25 23:17:15.330623] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.228 [2024-11-25 23:17:15.489020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.228 [2024-11-25 23:17:15.580426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.171 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.172 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:44.172 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:44.172 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.172 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.172 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.172 
1+0 records in 00:15:44.172 1+0 records out 00:15:44.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00099831 s, 4.1 MB/s 00:15:44.172 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.172 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:44.172 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.172 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.172 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:44.172 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:44.172 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.172 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.432 1+0 records in 00:15:44.432 1+0 records out 00:15:44.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000853353 s, 4.8 MB/s 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.432 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:44.693 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:44.693 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:44.693 23:17:16 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:44.693 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:44.693 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.693 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.693 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.693 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:44.693 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:44.693 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.693 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.693 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.693 1+0 records in 00:15:44.693 1+0 records out 00:15:44.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106562 s, 3.8 MB/s 00:15:44.693 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.693 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:44.693 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.694 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.694 23:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:44.694 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:44.694 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.694 23:17:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.955 1+0 records in 00:15:44.955 1+0 records out 00:15:44.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563494 s, 7.3 MB/s 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.955 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.217 1+0 records in 00:15:45.217 1+0 records out 00:15:45.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100394 s, 4.1 MB/s 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:45.217 23:17:17 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.217 1+0 records in 00:15:45.217 1+0 records out 00:15:45.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130003 s, 3.2 MB/s 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:45.217 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:45.478 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:45.478 { 00:15:45.478 "nbd_device": "/dev/nbd0", 00:15:45.478 "bdev_name": "nvme0n1" 00:15:45.478 }, 00:15:45.478 { 00:15:45.478 "nbd_device": "/dev/nbd1", 00:15:45.478 "bdev_name": "nvme0n2" 00:15:45.478 }, 00:15:45.478 { 00:15:45.478 "nbd_device": "/dev/nbd2", 00:15:45.478 "bdev_name": "nvme0n3" 00:15:45.478 }, 00:15:45.478 { 00:15:45.478 "nbd_device": "/dev/nbd3", 00:15:45.478 "bdev_name": "nvme1n1" 00:15:45.478 }, 00:15:45.478 { 00:15:45.478 "nbd_device": "/dev/nbd4", 00:15:45.478 "bdev_name": "nvme2n1" 00:15:45.478 }, 00:15:45.478 { 00:15:45.478 "nbd_device": "/dev/nbd5", 00:15:45.478 "bdev_name": "nvme3n1" 00:15:45.478 } 00:15:45.478 ]' 00:15:45.478 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:45.478 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:45.478 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:45.478 { 00:15:45.478 "nbd_device": "/dev/nbd0", 00:15:45.478 "bdev_name": "nvme0n1" 00:15:45.478 }, 00:15:45.478 { 00:15:45.478 "nbd_device": "/dev/nbd1", 00:15:45.478 "bdev_name": "nvme0n2" 00:15:45.478 }, 00:15:45.478 { 00:15:45.478 "nbd_device": "/dev/nbd2", 00:15:45.478 "bdev_name": "nvme0n3" 00:15:45.478 }, 00:15:45.478 { 00:15:45.478 "nbd_device": "/dev/nbd3", 00:15:45.478 "bdev_name": "nvme1n1" 00:15:45.478 }, 00:15:45.478 { 00:15:45.478 "nbd_device": "/dev/nbd4", 00:15:45.478 "bdev_name": "nvme2n1" 00:15:45.478 }, 00:15:45.478 { 00:15:45.478 "nbd_device": 
"/dev/nbd5", 00:15:45.478 "bdev_name": "nvme3n1" 00:15:45.478 } 00:15:45.478 ]' 00:15:45.478 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:45.478 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.478 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:45.478 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:45.478 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:45.478 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.478 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:45.740 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:45.740 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:45.740 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:45.740 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.740 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.740 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:45.740 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.740 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.740 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.740 23:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:46.001 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:46.001 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:46.001 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:46.001 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.001 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.001 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:46.001 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.001 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.001 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.001 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:46.262 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:46.262 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:46.262 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:46.262 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.262 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.262 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:46.262 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.262 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.262 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.262 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:46.522 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:46.522 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:46.522 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:46.522 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.523 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.523 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:46.523 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.523 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.523 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.523 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:46.782 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:46.782 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:46.782 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:46.782 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.782 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.782 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:46.782 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.782 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.782 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.782 23:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:46.782 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:46.782 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:46.782 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:46.782 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.782 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.782 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:46.782 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.782 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.782 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:46.782 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.782 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:47.041 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:47.041 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:47.041 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:47.041 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:47.041 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:47.041 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:47.041 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:47.041 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:47.041 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:47.042 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:47.302 /dev/nbd0 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.302 1+0 records in 00:15:47.302 1+0 records out 00:15:47.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000992106 s, 4.1 MB/s 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:47.302 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:47.563 /dev/nbd1 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.563 1+0 records in 00:15:47.563 1+0 records out 00:15:47.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104213 s, 3.9 MB/s 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:47.563 23:17:19 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:47.563 23:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:47.824 /dev/nbd10 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.824 1+0 records in 00:15:47.824 1+0 records out 00:15:47.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000935952 s, 4.4 MB/s 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:47.824 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:48.085 /dev/nbd11 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.085 23:17:20 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.085 1+0 records in 00:15:48.085 1+0 records out 00:15:48.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108675 s, 3.8 MB/s 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.085 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:48.345 /dev/nbd12 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.345 1+0 records in 00:15:48.345 1+0 records out 00:15:48.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000927496 s, 4.4 MB/s 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.345 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:48.605 /dev/nbd13 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.605 1+0 records in 00:15:48.605 1+0 records out 00:15:48.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000806009 s, 5.1 MB/s 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:48.605 { 00:15:48.605 "nbd_device": "/dev/nbd0", 00:15:48.605 "bdev_name": "nvme0n1" 00:15:48.605 }, 00:15:48.605 { 00:15:48.605 "nbd_device": "/dev/nbd1", 00:15:48.605 "bdev_name": "nvme0n2" 00:15:48.605 }, 00:15:48.605 { 00:15:48.605 "nbd_device": "/dev/nbd10", 00:15:48.605 "bdev_name": "nvme0n3" 00:15:48.605 }, 00:15:48.605 { 00:15:48.605 "nbd_device": "/dev/nbd11", 00:15:48.605 "bdev_name": "nvme1n1" 00:15:48.605 }, 00:15:48.605 { 00:15:48.605 "nbd_device": "/dev/nbd12", 00:15:48.605 "bdev_name": "nvme2n1" 00:15:48.605 }, 00:15:48.605 { 00:15:48.605 "nbd_device": "/dev/nbd13", 00:15:48.605 "bdev_name": "nvme3n1" 00:15:48.605 } 00:15:48.605 ]' 00:15:48.605 23:17:20 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:48.605 { 00:15:48.605 "nbd_device": "/dev/nbd0", 00:15:48.605 "bdev_name": "nvme0n1" 00:15:48.605 }, 00:15:48.605 { 00:15:48.605 "nbd_device": "/dev/nbd1", 00:15:48.605 "bdev_name": "nvme0n2" 00:15:48.605 }, 00:15:48.605 { 00:15:48.605 "nbd_device": "/dev/nbd10", 00:15:48.605 "bdev_name": "nvme0n3" 00:15:48.605 }, 00:15:48.605 { 00:15:48.605 "nbd_device": "/dev/nbd11", 00:15:48.605 "bdev_name": "nvme1n1" 00:15:48.605 }, 00:15:48.605 { 00:15:48.605 "nbd_device": "/dev/nbd12", 00:15:48.605 "bdev_name": "nvme2n1" 00:15:48.605 }, 00:15:48.605 { 00:15:48.605 "nbd_device": "/dev/nbd13", 00:15:48.605 "bdev_name": "nvme3n1" 00:15:48.605 } 00:15:48.605 ]' 00:15:48.605 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:48.866 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:48.866 /dev/nbd1 00:15:48.866 /dev/nbd10 00:15:48.866 /dev/nbd11 00:15:48.866 /dev/nbd12 00:15:48.866 /dev/nbd13' 00:15:48.866 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:48.866 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:48.866 /dev/nbd1 00:15:48.866 /dev/nbd10 00:15:48.866 /dev/nbd11 00:15:48.866 /dev/nbd12 00:15:48.866 /dev/nbd13' 00:15:48.866 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:48.866 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:48.866 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:48.866 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:48.866 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:48.866 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:48.866 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:48.866 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:48.866 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:48.866 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:48.866 23:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:48.866 256+0 records in 00:15:48.866 256+0 records out 00:15:48.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128776 s, 81.4 MB/s 00:15:48.866 23:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:48.866 23:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:49.128 256+0 records in 00:15:49.128 256+0 records out 00:15:49.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225939 s, 4.6 MB/s 00:15:49.128 23:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.128 23:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:49.128 256+0 records in 00:15:49.128 256+0 records out 00:15:49.128 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.221003 s, 4.7 MB/s 00:15:49.128 23:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.128 23:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:49.390 256+0 records in 00:15:49.390 256+0 records out 00:15:49.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.203795 s, 5.1 MB/s 00:15:49.390 23:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.390 23:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:49.651 256+0 records in 00:15:49.651 256+0 records out 00:15:49.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135377 s, 7.7 MB/s 00:15:49.651 23:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.651 23:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:49.651 256+0 records in 00:15:49.651 256+0 records out 00:15:49.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167798 s, 6.2 MB/s 00:15:49.651 23:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:49.651 23:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:50.008 256+0 records in 00:15:50.008 256+0 records out 00:15:50.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.249458 s, 4.2 MB/s 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.008 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:50.270 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:50.270 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:50.270 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:50.270 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.270 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.270 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:50.270 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:50.270 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.270 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.270 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:50.531 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:50.532 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:50.532 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:50.532 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.532 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.532 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:50.532 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:50.532 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.532 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.532 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:50.792 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:50.792 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:50.792 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:50.792 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.792 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.792 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:50.792 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:50.792 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.792 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.792 23:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:50.792 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:50.792 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:50.792 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:50.792 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.792 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.792 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:50.792 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:50.792 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.792 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.792 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:51.053 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:51.053 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:51.053 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:51.053 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.053 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.053 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:51.053 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.053 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.053 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.053 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:51.314 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:51.314 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:51.314 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:51.314 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.314 23:17:23 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.314 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:51.314 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.314 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.314 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:51.314 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:51.314 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:51.575 23:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:51.836 malloc_lvol_verify 00:15:51.836 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:51.836 47c0f814-b682-4331-a208-11d3c3e38417 00:15:52.098 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:52.098 22508df3-1fc2-43d2-bb64-90fdbe716a54 00:15:52.098 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:52.361 /dev/nbd0 00:15:52.361 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:52.361 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:52.361 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:52.361 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:52.361 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:15:52.361 mke2fs 1.47.0 (5-Feb-2023) 00:15:52.361 Discarding device blocks: 0/4096 done 00:15:52.361 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:52.361 00:15:52.361 Allocating group tables: 0/1 done 00:15:52.361 Writing inode tables: 0/1 done 00:15:52.361 Creating journal (1024 blocks): done 00:15:52.361 Writing superblocks and filesystem accounting information: 0/1 done 00:15:52.361 00:15:52.361 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:52.361 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:52.361 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:52.361 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:52.361 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:52.361 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.361 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72375 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72375 ']' 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72375 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72375 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.622 killing process with pid 72375 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72375' 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72375 00:15:52.622 23:17:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72375 00:15:53.195 23:17:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:53.195 00:15:53.195 real 0m10.251s 00:15:53.195 user 0m13.997s 00:15:53.195 sys 0m3.462s 00:15:53.195 23:17:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.195 23:17:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:53.195 ************************************ 
00:15:53.195 END TEST bdev_nbd 00:15:53.195 ************************************ 00:15:53.457 23:17:25 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:53.457 23:17:25 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:15:53.457 23:17:25 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:15:53.457 23:17:25 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:15:53.457 23:17:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:53.457 23:17:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.457 23:17:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.457 ************************************ 00:15:53.457 START TEST bdev_fio 00:15:53.457 ************************************ 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:53.457 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:53.457 ************************************ 00:15:53.457 START TEST bdev_fio_rw_verify 00:15:53.457 ************************************ 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:53.457 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:53.458 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:53.458 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:53.458 23:17:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:53.719 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:53.719 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:53.719 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:53.719 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:53.719 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:53.719 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:53.719 fio-3.35 00:15:53.719 Starting 6 threads 00:16:05.955 00:16:05.955 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72778: Mon Nov 25 23:17:36 2024 00:16:05.955 read: IOPS=15.1k, BW=59.0MiB/s (61.8MB/s)(590MiB/10002msec) 00:16:05.955 slat (usec): min=2, max=2358, avg= 5.63, stdev=13.91 00:16:05.955 clat (usec): min=85, max=8565, avg=1256.30, 
stdev=711.51 00:16:05.955 lat (usec): min=89, max=8569, avg=1261.93, stdev=711.88 00:16:05.955 clat percentiles (usec): 00:16:05.955 | 50.000th=[ 1172], 99.000th=[ 3392], 99.900th=[ 4817], 99.990th=[ 6128], 00:16:05.955 | 99.999th=[ 8586] 00:16:05.955 write: IOPS=15.4k, BW=60.3MiB/s (63.2MB/s)(603MiB/10002msec); 0 zone resets 00:16:05.955 slat (usec): min=12, max=3544, avg=41.83, stdev=142.14 00:16:05.955 clat (usec): min=86, max=7497, avg=1555.79, stdev=773.60 00:16:05.955 lat (usec): min=100, max=7521, avg=1597.62, stdev=786.02 00:16:05.955 clat percentiles (usec): 00:16:05.955 | 50.000th=[ 1450], 99.000th=[ 3884], 99.900th=[ 5211], 99.990th=[ 6259], 00:16:05.955 | 99.999th=[ 7504] 00:16:05.955 bw ( KiB/s): min=48986, max=75800, per=100.00%, avg=61812.53, stdev=1480.31, samples=114 00:16:05.955 iops : min=12242, max=18949, avg=15451.89, stdev=370.10, samples=114 00:16:05.955 lat (usec) : 100=0.01%, 250=2.43%, 500=6.62%, 750=9.94%, 1000=13.04% 00:16:05.955 lat (msec) : 2=49.16%, 4=18.24%, 10=0.57% 00:16:05.955 cpu : usr=41.48%, sys=30.47%, ctx=5248, majf=0, minf=15170 00:16:05.955 IO depths : 1=11.3%, 2=23.7%, 4=51.2%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.955 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.955 issued rwts: total=150984,154443,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.955 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:05.955 00:16:05.955 Run status group 0 (all jobs): 00:16:05.955 READ: bw=59.0MiB/s (61.8MB/s), 59.0MiB/s-59.0MiB/s (61.8MB/s-61.8MB/s), io=590MiB (618MB), run=10002-10002msec 00:16:05.955 WRITE: bw=60.3MiB/s (63.2MB/s), 60.3MiB/s-60.3MiB/s (63.2MB/s-63.2MB/s), io=603MiB (633MB), run=10002-10002msec 00:16:05.955 ----------------------------------------------------- 00:16:05.955 Suppressions used: 00:16:05.955 count bytes template 00:16:05.955 6 48 /usr/src/fio/parse.c 00:16:05.955 3359 322464 /usr/src/fio/iolog.c 00:16:05.955 1 8 libtcmalloc_minimal.so 00:16:05.955 1 904 libcrypto.so 00:16:05.955 ----------------------------------------------------- 00:16:05.955 00:16:05.955 00:16:05.955 real 0m11.819s 00:16:05.955 user 0m26.293s 00:16:05.955 sys 0m18.554s 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:05.955 ************************************ 00:16:05.955 END TEST bdev_fio_rw_verify 00:16:05.955 ************************************ 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "dc952816-3255-4870-88a2-e3b1b1b27f1c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dc952816-3255-4870-88a2-e3b1b1b27f1c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "fb9261ea-cbcd-4807-8b8e-2e5b69d99a6f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fb9261ea-cbcd-4807-8b8e-2e5b69d99a6f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "6771aced-8396-4a65-a69f-aed44059d24c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6771aced-8396-4a65-a69f-aed44059d24c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "0679a510-fa85-496d-91de-64a6533af6ca"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0679a510-fa85-496d-91de-64a6533af6ca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "1b8149f9-2bf5-4506-a4c7-7f3fda4d2b10"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1b8149f9-2bf5-4506-a4c7-7f3fda4d2b10",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a15de700-3867-49b6-a751-2b62f09c73eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a15de700-3867-49b6-a751-2b62f09c73eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:05.955 /home/vagrant/spdk_repo/spdk 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:16:05.955 00:16:05.955 real 0m11.994s 00:16:05.955 user 0m26.366s 00:16:05.955 sys 0m18.625s 00:16:05.955 ************************************ 00:16:05.955 END TEST bdev_fio 00:16:05.955 ************************************ 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.955 23:17:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:05.955 23:17:37 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:05.955 23:17:37 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:05.955 23:17:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:05.955 23:17:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.955 23:17:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:05.955 ************************************ 00:16:05.956 START TEST bdev_verify 00:16:05.956 ************************************ 00:16:05.956 23:17:37 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:05.956 [2024-11-25 23:17:37.705512] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:16:05.956 [2024-11-25 23:17:37.705628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72949 ] 00:16:05.956 [2024-11-25 23:17:37.864328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:05.956 [2024-11-25 23:17:37.958996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.956 [2024-11-25 23:17:37.959084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.216 Running I/O for 5 seconds... 
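Before the results land, the bdevperf invocation behind this verify pass is worth unpacking flag by flag. The comments are editorial glosses; -C is simply what the harness passes, and its effect is visible in the per-core Job lines below, where every bdev gets a job on each core:

    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdevs to create
        -q 128      # queue depth per job
        -o 4096     # IO size in bytes (4 KiB)
        -w verify   # write, read back, and compare the data
        -t 5        # run time in seconds
        -C          # harness-supplied; each bdev ends up with a job per core
        -m 0x3      # core mask 0x3: reactors on cores 0 and 1
    )
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}" ''

The 0x3 mask is why two "Reactor started" notices appear above, and why the latency table below lists each nvme device twice, once under core mask 0x1 and once under 0x2.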
00:16:08.547 23072.00 IOPS, 90.12 MiB/s [2024-11-25T23:17:41.859Z] 22192.00 IOPS, 86.69 MiB/s [2024-11-25T23:17:42.801Z] 22101.33 IOPS, 86.33 MiB/s [2024-11-25T23:17:43.743Z] 22040.00 IOPS, 86.09 MiB/s [2024-11-25T23:17:43.743Z] 22054.20 IOPS, 86.15 MiB/s 00:16:11.374 Latency(us) 00:16:11.374 [2024-11-25T23:17:43.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.374 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:11.374 Verification LBA range: start 0x0 length 0x80000 00:16:11.374 nvme0n1 : 5.05 1798.49 7.03 0.00 0.00 71010.89 5293.29 72997.02 00:16:11.374 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:11.374 Verification LBA range: start 0x80000 length 0x80000 00:16:11.374 nvme0n1 : 5.02 1758.78 6.87 0.00 0.00 72611.02 5444.53 83886.08 00:16:11.374 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:11.374 Verification LBA range: start 0x0 length 0x80000 00:16:11.374 nvme0n2 : 5.06 1795.38 7.01 0.00 0.00 70958.85 8771.74 72190.42 00:16:11.374 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:11.374 Verification LBA range: start 0x80000 length 0x80000 00:16:11.374 nvme0n2 : 5.03 1706.26 6.67 0.00 0.00 74666.48 6452.78 76626.71 00:16:11.374 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:11.374 Verification LBA range: start 0x0 length 0x80000 00:16:11.374 nvme0n3 : 5.06 1794.42 7.01 0.00 0.00 70827.74 6326.74 70980.53 00:16:11.374 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:11.374 Verification LBA range: start 0x80000 length 0x80000 00:16:11.374 nvme0n3 : 5.06 1720.31 6.72 0.00 0.00 73873.16 5747.00 68560.74 00:16:11.374 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:11.374 Verification LBA range: start 0x0 length 0x20000 00:16:11.374 nvme1n1 : 5.07 1792.97 7.00 0.00 0.00 70731.92 7662.67 71787.13 00:16:11.374 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:11.374 Verification LBA range: start 0x20000 length 0x20000 00:16:11.374 nvme1n1 : 5.04 1700.38 6.64 0.00 0.00 74556.60 9679.16 75820.11 00:16:11.374 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:11.374 Verification LBA range: start 0x0 length 0xa0000 00:16:11.374 nvme2n1 : 5.08 1764.18 6.89 0.00 0.00 71724.65 6326.74 67754.14 00:16:11.374 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:11.374 Verification LBA range: start 0xa0000 length 0xa0000 00:16:11.374 nvme2n1 : 5.07 1541.34 6.02 0.00 0.00 82046.46 5873.03 93968.54 00:16:11.374 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:11.374 Verification LBA range: start 0x0 length 0xbd0bd 00:16:11.375 nvme3n1 : 5.08 2184.79 8.53 0.00 0.00 57698.18 4587.52 64527.75 00:16:11.375 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:11.375 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:11.375 nvme3n1 : 5.08 2237.99 8.74 0.00 0.00 56309.83 4436.28 70980.53 00:16:11.375 [2024-11-25T23:17:43.744Z] =================================================================================================================== 00:16:11.375 [2024-11-25T23:17:43.744Z] Total : 21795.29 85.14 0.00 0.00 69867.15 4436.28 93968.54 00:16:11.947 00:16:11.947 real 0m6.549s 00:16:11.947 user 0m10.973s 00:16:11.947 sys 0m1.162s 00:16:11.947 23:17:44 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.947 23:17:44 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:11.947 ************************************ 00:16:11.947 END TEST bdev_verify 00:16:11.947 ************************************ 00:16:11.947 23:17:44 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:11.947 23:17:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:11.947 23:17:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.947 23:17:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:11.947 ************************************ 00:16:11.947 START TEST bdev_verify_big_io 00:16:11.947 ************************************ 00:16:11.947 23:17:44 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:12.208 [2024-11-25 23:17:44.322493] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:16:12.208 [2024-11-25 23:17:44.322630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73042 ] 00:16:12.208 [2024-11-25 23:17:44.483792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:12.469 [2024-11-25 23:17:44.579738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.469 [2024-11-25 23:17:44.579813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.731 Running I/O for 5 seconds... 
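The big-IO pass that just started is the same verify workload with one knob turned: -o grows from 4096 to 65536 bytes, so the run is dominated by far fewer, larger IOs (compare the IOPS columns of the two latency tables). A condensed driver for both passes, assuming the suite's run_test helper and a rootdir variable pointing at the spdk checkout (that variable spelling is hypothetical here):

    # run the verify workload at both IO sizes exercised by this suite
    for io_size in 4096 65536; do
        run_test "bdev_verify_${io_size}" \
            "$rootdir/build/examples/bdevperf" \
            --json "$rootdir/test/bdev/bdev.json" \
            -q 128 -o "$io_size" -w verify -t 5 -C -m 0x3 ''
    done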
00:16:17.232 112.00 IOPS, 7.00 MiB/s [2024-11-25T23:17:51.511Z] 1576.00 IOPS, 98.50 MiB/s [2024-11-25T23:17:51.511Z] 2088.67 IOPS, 130.54 MiB/s 00:16:19.142 Latency(us) 00:16:19.142 [2024-11-25T23:17:51.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.142 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.142 Verification LBA range: start 0x0 length 0x8000 00:16:19.142 nvme0n1 : 5.90 108.39 6.77 0.00 0.00 1136725.70 196003.05 1148594.02 00:16:19.142 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.142 Verification LBA range: start 0x8000 length 0x8000 00:16:19.142 nvme0n1 : 5.82 123.68 7.73 0.00 0.00 998815.69 29239.14 1155046.79 00:16:19.142 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.142 Verification LBA range: start 0x0 length 0x8000 00:16:19.142 nvme0n2 : 5.91 116.47 7.28 0.00 0.00 1030702.27 12905.55 1387346.71 00:16:19.142 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.142 Verification LBA range: start 0x8000 length 0x8000 00:16:19.142 nvme0n2 : 5.69 125.21 7.83 0.00 0.00 942464.61 102841.11 967916.31 00:16:19.142 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.142 Verification LBA range: start 0x0 length 0x8000 00:16:19.142 nvme0n3 : 5.91 113.71 7.11 0.00 0.00 1021496.34 16938.54 1703532.70 00:16:19.142 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.142 Verification LBA range: start 0x8000 length 0x8000 00:16:19.142 nvme0n3 : 5.96 104.64 6.54 0.00 0.00 1093338.90 175838.13 1780966.01 00:16:19.142 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.142 Verification LBA range: start 0x0 length 0x2000 00:16:19.143 nvme1n1 : 5.98 93.78 5.86 0.00 0.00 1207371.00 68560.74 2735976.76 00:16:19.143 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.143 Verification LBA range: start 0x2000 length 0x2000 00:16:19.143 nvme1n1 : 6.00 125.26 7.83 0.00 0.00 900875.18 36095.21 1871304.86 00:16:19.143 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.143 Verification LBA range: start 0x0 length 0xa000 00:16:19.143 nvme2n1 : 6.07 121.19 7.57 0.00 0.00 893099.83 563.99 1045349.61 00:16:19.143 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.143 Verification LBA range: start 0xa000 length 0xa000 00:16:19.143 nvme2n1 : 6.32 119.06 7.44 0.00 0.00 887802.05 652.21 1464780.01 00:16:19.143 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.143 Verification LBA range: start 0x0 length 0xbd0b 00:16:19.143 nvme3n1 : 5.99 146.94 9.18 0.00 0.00 717515.29 434.81 1664816.05 00:16:19.143 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.143 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:19.143 nvme3n1 : 6.02 146.25 9.14 0.00 0.00 716889.54 850.71 1568024.42 00:16:19.143 [2024-11-25T23:17:51.512Z] =================================================================================================================== 00:16:19.143 [2024-11-25T23:17:51.512Z] Total : 1444.59 90.29 0.00 0.00 944356.14 434.81 2735976.76 00:16:20.086 00:16:20.086 real 0m8.003s 00:16:20.086 user 0m14.851s 00:16:20.086 sys 0m0.363s 00:16:20.086 23:17:52 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.086 ************************************ 
00:16:20.086 END TEST bdev_verify_big_io 00:16:20.086 ************************************ 00:16:20.086 23:17:52 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.086 23:17:52 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:20.086 23:17:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:20.086 23:17:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.086 23:17:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:20.086 ************************************ 00:16:20.086 START TEST bdev_write_zeroes 00:16:20.086 ************************************ 00:16:20.086 23:17:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:20.086 [2024-11-25 23:17:52.390771] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:16:20.086 [2024-11-25 23:17:52.390890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73152 ] 00:16:20.346 [2024-11-25 23:17:52.550336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.346 [2024-11-25 23:17:52.645314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.917 Running I/O for 1 seconds... 00:16:21.862 75680.00 IOPS, 295.62 MiB/s 00:16:21.862 Latency(us) 00:16:21.862 [2024-11-25T23:17:54.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.862 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:21.862 nvme0n1 : 1.02 12577.59 49.13 0.00 0.00 10167.10 3856.54 19862.45 00:16:21.862 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:21.862 nvme0n2 : 1.02 12437.55 48.58 0.00 0.00 10274.12 4159.02 20265.75 00:16:21.862 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:21.862 nvme0n3 : 1.02 12423.53 48.53 0.00 0.00 10276.11 4032.98 20769.87 00:16:21.862 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:21.862 nvme1n1 : 1.02 12356.70 48.27 0.00 0.00 10323.81 4108.60 21475.64 00:16:21.862 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:21.862 nvme2n1 : 1.02 12341.63 48.21 0.00 0.00 10329.61 4209.43 19862.45 00:16:21.862 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:21.862 nvme3n1 : 1.02 13429.12 52.46 0.00 0.00 9484.72 3604.48 20265.75 00:16:21.862 [2024-11-25T23:17:54.231Z] =================================================================================================================== 00:16:21.862 [2024-11-25T23:17:54.231Z] Total : 75566.12 295.18 0.00 0.00 10132.98 3604.48 21475.64 00:16:22.434 00:16:22.434 real 0m2.432s 00:16:22.434 user 0m1.793s 00:16:22.434 sys 0m0.446s 00:16:22.434 23:17:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:22.434 ************************************ 00:16:22.434 23:17:54 blockdev_xnvme.bdev_write_zeroes -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.434 END TEST bdev_write_zeroes 00:16:22.434 ************************************ 00:16:22.695 23:17:54 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:22.695 23:17:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:22.695 23:17:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.695 23:17:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:22.695 ************************************ 00:16:22.695 START TEST bdev_json_nonenclosed 00:16:22.695 ************************************ 00:16:22.695 23:17:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:22.695 [2024-11-25 23:17:54.884593] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:16:22.695 [2024-11-25 23:17:54.884708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73200 ] 00:16:22.695 [2024-11-25 23:17:55.039730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.956 [2024-11-25 23:17:55.137629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.956 [2024-11-25 23:17:55.137706] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:22.956 [2024-11-25 23:17:55.137722] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:22.956 [2024-11-25 23:17:55.137732] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:22.956 00:16:22.956 real 0m0.494s 00:16:22.956 user 0m0.299s 00:16:22.956 sys 0m0.091s 00:16:22.956 23:17:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:22.956 ************************************ 00:16:22.956 END TEST bdev_json_nonenclosed 00:16:22.956 ************************************ 00:16:22.956 23:17:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:23.219 23:17:55 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:23.219 23:17:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:23.219 23:17:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.219 23:17:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:23.219 ************************************ 00:16:23.219 START TEST bdev_json_nonarray 00:16:23.219 ************************************ 00:16:23.219 23:17:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:23.219 [2024-11-25 23:17:55.438523] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
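Both JSON negative tests around this point follow one pattern: hand bdevperf a deliberately malformed config and assert that json_config fails cleanly instead of crashing. The nonenclosed case above trips "not enclosed in {}"; the nonarray case starting here trips "'subsystems' should be an array". A hypothetical reproduction of the first, since the actual contents of nonenclosed.json are not shown in this log:

    # top-level value is not a JSON object, so config parsing must reject it
    printf '"subsystems": []\n' > /tmp/nonenclosed.json
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
    # expect: *ERROR*: Invalid JSON configuration: not enclosed in {}.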
00:16:23.219 [2024-11-25 23:17:55.438636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73225 ] 00:16:23.481 [2024-11-25 23:17:55.607590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.481 [2024-11-25 23:17:55.704050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.481 [2024-11-25 23:17:55.704143] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:23.481 [2024-11-25 23:17:55.704160] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:23.481 [2024-11-25 23:17:55.704169] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:23.741 00:16:23.741 real 0m0.501s 00:16:23.741 user 0m0.299s 00:16:23.741 sys 0m0.096s 00:16:23.741 23:17:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.741 23:17:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:23.741 ************************************ 00:16:23.741 END TEST bdev_json_nonarray 00:16:23.741 ************************************ 00:16:23.741 23:17:55 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:16:23.741 23:17:55 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:16:23.741 23:17:55 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:16:23.741 23:17:55 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:23.741 23:17:55 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:23.741 23:17:55 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:23.741 23:17:55 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:23.741 23:17:55 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:23.741 23:17:55 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:23.741 23:17:55 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:23.741 23:17:55 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:23.741 23:17:55 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:24.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:26.860 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:27.119 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:27.380 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:27.643 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:27.643 00:16:27.643 real 0m52.661s 00:16:27.643 user 1m20.340s 00:16:27.643 sys 0m35.335s 00:16:27.643 ************************************ 00:16:27.643 END TEST blockdev_xnvme 00:16:27.643 ************************************ 00:16:27.643 23:17:59 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.643 23:17:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.643 23:17:59 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:27.643 23:17:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:27.643 23:17:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.643 23:17:59 -- 
common/autotest_common.sh@10 -- # set +x 00:16:27.643 ************************************ 00:16:27.643 START TEST ublk 00:16:27.643 ************************************ 00:16:27.643 23:17:59 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:27.643 * Looking for test storage... 00:16:27.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:27.643 23:17:59 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:27.643 23:17:59 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:27.643 23:17:59 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:27.643 23:18:00 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:27.643 23:18:00 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.643 23:18:00 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.643 23:18:00 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.643 23:18:00 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.643 23:18:00 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:27.643 23:18:00 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.643 23:18:00 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:27.643 23:18:00 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.643 23:18:00 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.643 23:18:00 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.643 23:18:00 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.643 23:18:00 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:27.643 23:18:00 ublk -- scripts/common.sh@345 -- # : 1 00:16:27.643 23:18:00 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.643 23:18:00 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:27.643 23:18:00 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:27.904 23:18:00 ublk -- scripts/common.sh@353 -- # local d=1 00:16:27.904 23:18:00 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.904 23:18:00 ublk -- scripts/common.sh@355 -- # echo 1 00:16:27.904 23:18:00 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.904 23:18:00 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:27.904 23:18:00 ublk -- scripts/common.sh@353 -- # local d=2 00:16:27.905 23:18:00 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.905 23:18:00 ublk -- scripts/common.sh@355 -- # echo 2 00:16:27.905 23:18:00 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.905 23:18:00 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.905 23:18:00 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.905 23:18:00 ublk -- scripts/common.sh@368 -- # return 0 00:16:27.905 23:18:00 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.905 23:18:00 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:27.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.905 --rc genhtml_branch_coverage=1 00:16:27.905 --rc genhtml_function_coverage=1 00:16:27.905 --rc genhtml_legend=1 00:16:27.905 --rc geninfo_all_blocks=1 00:16:27.905 --rc geninfo_unexecuted_blocks=1 00:16:27.905 00:16:27.905 ' 00:16:27.905 23:18:00 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:27.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.905 --rc genhtml_branch_coverage=1 00:16:27.905 --rc genhtml_function_coverage=1 00:16:27.905 --rc genhtml_legend=1 00:16:27.905 --rc geninfo_all_blocks=1 00:16:27.905 --rc geninfo_unexecuted_blocks=1 00:16:27.905 00:16:27.905 ' 00:16:27.905 23:18:00 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:27.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.905 --rc genhtml_branch_coverage=1 00:16:27.905 --rc genhtml_function_coverage=1 00:16:27.905 --rc genhtml_legend=1 00:16:27.905 --rc geninfo_all_blocks=1 00:16:27.905 --rc geninfo_unexecuted_blocks=1 00:16:27.905 00:16:27.905 ' 00:16:27.905 23:18:00 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:27.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.905 --rc genhtml_branch_coverage=1 00:16:27.905 --rc genhtml_function_coverage=1 00:16:27.905 --rc genhtml_legend=1 00:16:27.905 --rc geninfo_all_blocks=1 00:16:27.905 --rc geninfo_unexecuted_blocks=1 00:16:27.905 00:16:27.905 ' 00:16:27.905 23:18:00 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:27.905 23:18:00 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:27.905 23:18:00 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:27.905 23:18:00 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:27.905 23:18:00 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:27.905 23:18:00 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:27.905 23:18:00 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:27.905 23:18:00 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:27.905 23:18:00 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:27.905 23:18:00 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:27.905 23:18:00 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:27.905 23:18:00 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:27.905 23:18:00 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:27.905 23:18:00 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:27.905 23:18:00 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:27.905 23:18:00 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:27.905 23:18:00 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:27.905 23:18:00 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:27.905 23:18:00 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:27.905 23:18:00 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:27.905 23:18:00 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:27.905 23:18:00 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.905 23:18:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:27.905 ************************************ 00:16:27.905 START TEST test_save_ublk_config 00:16:27.905 ************************************ 00:16:27.905 23:18:00 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:27.905 23:18:00 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:27.905 23:18:00 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73512 00:16:27.905 23:18:00 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:27.905 23:18:00 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73512 00:16:27.905 23:18:00 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73512 ']' 00:16:27.905 23:18:00 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:27.905 23:18:00 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.905 23:18:00 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.905 23:18:00 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.905 23:18:00 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.905 23:18:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:27.905 [2024-11-25 23:18:00.116038] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
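With the target listening, test_save_ublk_config creates a ublk device backed by malloc0 and then snapshots the entire runtime configuration over RPC. The JSON dumped below can be narrowed to just the ublk pieces with jq; a sketch against the same socket (scripts/rpc.py and jq assumed available):

    # pull only the ublk subsystem out of the running target's saved config
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config \
        | jq '.subsystems[] | select(.subsystem == "ublk") | .config'
    # expected: ublk_create_target with {"cpumask": "1"}, followed by
    # ublk_start_disk with {"bdev_name": "malloc0", "ublk_id": 0, ...}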
00:16:27.905 [2024-11-25 23:18:00.116169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73512 ] 00:16:28.166 [2024-11-25 23:18:00.275483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.166 [2024-11-25 23:18:00.371032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.738 23:18:00 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.738 23:18:00 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:28.738 23:18:00 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:28.738 23:18:00 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:28.738 23:18:00 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.738 23:18:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:28.738 [2024-11-25 23:18:00.980077] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:28.738 [2024-11-25 23:18:00.980871] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:28.738 malloc0 00:16:28.738 [2024-11-25 23:18:01.043186] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:28.738 [2024-11-25 23:18:01.043256] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:28.738 [2024-11-25 23:18:01.043265] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:28.738 [2024-11-25 23:18:01.043272] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:28.738 [2024-11-25 23:18:01.051089] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:28.738 [2024-11-25 23:18:01.051110] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:28.738 [2024-11-25 23:18:01.059088] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:28.738 [2024-11-25 23:18:01.059180] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:28.738 [2024-11-25 23:18:01.076082] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:28.738 0 00:16:28.738 23:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.738 23:18:01 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:28.738 23:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.738 23:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:29.311 23:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.311 23:18:01 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:29.311 "subsystems": [ 00:16:29.311 { 00:16:29.311 "subsystem": "fsdev", 00:16:29.311 "config": [ 00:16:29.311 { 00:16:29.311 "method": "fsdev_set_opts", 00:16:29.311 "params": { 00:16:29.311 "fsdev_io_pool_size": 65535, 00:16:29.311 "fsdev_io_cache_size": 256 00:16:29.311 } 00:16:29.311 } 00:16:29.311 ] 00:16:29.311 }, 00:16:29.311 { 00:16:29.311 "subsystem": "keyring", 00:16:29.311 "config": [] 00:16:29.311 }, 00:16:29.311 { 00:16:29.311 "subsystem": "iobuf", 00:16:29.311 "config": [ 00:16:29.311 { 
00:16:29.311 "method": "iobuf_set_options", 00:16:29.311 "params": { 00:16:29.311 "small_pool_count": 8192, 00:16:29.311 "large_pool_count": 1024, 00:16:29.311 "small_bufsize": 8192, 00:16:29.311 "large_bufsize": 135168, 00:16:29.311 "enable_numa": false 00:16:29.311 } 00:16:29.311 } 00:16:29.311 ] 00:16:29.311 }, 00:16:29.311 { 00:16:29.311 "subsystem": "sock", 00:16:29.311 "config": [ 00:16:29.311 { 00:16:29.311 "method": "sock_set_default_impl", 00:16:29.311 "params": { 00:16:29.311 "impl_name": "posix" 00:16:29.311 } 00:16:29.311 }, 00:16:29.311 { 00:16:29.311 "method": "sock_impl_set_options", 00:16:29.311 "params": { 00:16:29.311 "impl_name": "ssl", 00:16:29.311 "recv_buf_size": 4096, 00:16:29.311 "send_buf_size": 4096, 00:16:29.311 "enable_recv_pipe": true, 00:16:29.311 "enable_quickack": false, 00:16:29.311 "enable_placement_id": 0, 00:16:29.311 "enable_zerocopy_send_server": true, 00:16:29.311 "enable_zerocopy_send_client": false, 00:16:29.311 "zerocopy_threshold": 0, 00:16:29.311 "tls_version": 0, 00:16:29.311 "enable_ktls": false 00:16:29.311 } 00:16:29.311 }, 00:16:29.311 { 00:16:29.311 "method": "sock_impl_set_options", 00:16:29.311 "params": { 00:16:29.311 "impl_name": "posix", 00:16:29.311 "recv_buf_size": 2097152, 00:16:29.311 "send_buf_size": 2097152, 00:16:29.311 "enable_recv_pipe": true, 00:16:29.311 "enable_quickack": false, 00:16:29.312 "enable_placement_id": 0, 00:16:29.312 "enable_zerocopy_send_server": true, 00:16:29.312 "enable_zerocopy_send_client": false, 00:16:29.312 "zerocopy_threshold": 0, 00:16:29.312 "tls_version": 0, 00:16:29.312 "enable_ktls": false 00:16:29.312 } 00:16:29.312 } 00:16:29.312 ] 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "subsystem": "vmd", 00:16:29.312 "config": [] 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "subsystem": "accel", 00:16:29.312 "config": [ 00:16:29.312 { 00:16:29.312 "method": "accel_set_options", 00:16:29.312 "params": { 00:16:29.312 "small_cache_size": 128, 00:16:29.312 "large_cache_size": 16, 00:16:29.312 "task_count": 2048, 00:16:29.312 "sequence_count": 2048, 00:16:29.312 "buf_count": 2048 00:16:29.312 } 00:16:29.312 } 00:16:29.312 ] 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "subsystem": "bdev", 00:16:29.312 "config": [ 00:16:29.312 { 00:16:29.312 "method": "bdev_set_options", 00:16:29.312 "params": { 00:16:29.312 "bdev_io_pool_size": 65535, 00:16:29.312 "bdev_io_cache_size": 256, 00:16:29.312 "bdev_auto_examine": true, 00:16:29.312 "iobuf_small_cache_size": 128, 00:16:29.312 "iobuf_large_cache_size": 16 00:16:29.312 } 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "method": "bdev_raid_set_options", 00:16:29.312 "params": { 00:16:29.312 "process_window_size_kb": 1024, 00:16:29.312 "process_max_bandwidth_mb_sec": 0 00:16:29.312 } 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "method": "bdev_iscsi_set_options", 00:16:29.312 "params": { 00:16:29.312 "timeout_sec": 30 00:16:29.312 } 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "method": "bdev_nvme_set_options", 00:16:29.312 "params": { 00:16:29.312 "action_on_timeout": "none", 00:16:29.312 "timeout_us": 0, 00:16:29.312 "timeout_admin_us": 0, 00:16:29.312 "keep_alive_timeout_ms": 10000, 00:16:29.312 "arbitration_burst": 0, 00:16:29.312 "low_priority_weight": 0, 00:16:29.312 "medium_priority_weight": 0, 00:16:29.312 "high_priority_weight": 0, 00:16:29.312 "nvme_adminq_poll_period_us": 10000, 00:16:29.312 "nvme_ioq_poll_period_us": 0, 00:16:29.312 "io_queue_requests": 0, 00:16:29.312 "delay_cmd_submit": true, 00:16:29.312 "transport_retry_count": 4, 00:16:29.312 
"bdev_retry_count": 3, 00:16:29.312 "transport_ack_timeout": 0, 00:16:29.312 "ctrlr_loss_timeout_sec": 0, 00:16:29.312 "reconnect_delay_sec": 0, 00:16:29.312 "fast_io_fail_timeout_sec": 0, 00:16:29.312 "disable_auto_failback": false, 00:16:29.312 "generate_uuids": false, 00:16:29.312 "transport_tos": 0, 00:16:29.312 "nvme_error_stat": false, 00:16:29.312 "rdma_srq_size": 0, 00:16:29.312 "io_path_stat": false, 00:16:29.312 "allow_accel_sequence": false, 00:16:29.312 "rdma_max_cq_size": 0, 00:16:29.312 "rdma_cm_event_timeout_ms": 0, 00:16:29.312 "dhchap_digests": [ 00:16:29.312 "sha256", 00:16:29.312 "sha384", 00:16:29.312 "sha512" 00:16:29.312 ], 00:16:29.312 "dhchap_dhgroups": [ 00:16:29.312 "null", 00:16:29.312 "ffdhe2048", 00:16:29.312 "ffdhe3072", 00:16:29.312 "ffdhe4096", 00:16:29.312 "ffdhe6144", 00:16:29.312 "ffdhe8192" 00:16:29.312 ] 00:16:29.312 } 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "method": "bdev_nvme_set_hotplug", 00:16:29.312 "params": { 00:16:29.312 "period_us": 100000, 00:16:29.312 "enable": false 00:16:29.312 } 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "method": "bdev_malloc_create", 00:16:29.312 "params": { 00:16:29.312 "name": "malloc0", 00:16:29.312 "num_blocks": 8192, 00:16:29.312 "block_size": 4096, 00:16:29.312 "physical_block_size": 4096, 00:16:29.312 "uuid": "c70a445c-4479-46e3-961a-e0a5ff1847d3", 00:16:29.312 "optimal_io_boundary": 0, 00:16:29.312 "md_size": 0, 00:16:29.312 "dif_type": 0, 00:16:29.312 "dif_is_head_of_md": false, 00:16:29.312 "dif_pi_format": 0 00:16:29.312 } 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "method": "bdev_wait_for_examine" 00:16:29.312 } 00:16:29.312 ] 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "subsystem": "scsi", 00:16:29.312 "config": null 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "subsystem": "scheduler", 00:16:29.312 "config": [ 00:16:29.312 { 00:16:29.312 "method": "framework_set_scheduler", 00:16:29.312 "params": { 00:16:29.312 "name": "static" 00:16:29.312 } 00:16:29.312 } 00:16:29.312 ] 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "subsystem": "vhost_scsi", 00:16:29.312 "config": [] 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "subsystem": "vhost_blk", 00:16:29.312 "config": [] 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "subsystem": "ublk", 00:16:29.312 "config": [ 00:16:29.312 { 00:16:29.312 "method": "ublk_create_target", 00:16:29.312 "params": { 00:16:29.312 "cpumask": "1" 00:16:29.312 } 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "method": "ublk_start_disk", 00:16:29.312 "params": { 00:16:29.312 "bdev_name": "malloc0", 00:16:29.312 "ublk_id": 0, 00:16:29.312 "num_queues": 1, 00:16:29.312 "queue_depth": 128 00:16:29.312 } 00:16:29.312 } 00:16:29.312 ] 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "subsystem": "nbd", 00:16:29.312 "config": [] 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "subsystem": "nvmf", 00:16:29.312 "config": [ 00:16:29.312 { 00:16:29.312 "method": "nvmf_set_config", 00:16:29.312 "params": { 00:16:29.312 "discovery_filter": "match_any", 00:16:29.312 "admin_cmd_passthru": { 00:16:29.312 "identify_ctrlr": false 00:16:29.312 }, 00:16:29.312 "dhchap_digests": [ 00:16:29.312 "sha256", 00:16:29.312 "sha384", 00:16:29.312 "sha512" 00:16:29.312 ], 00:16:29.312 "dhchap_dhgroups": [ 00:16:29.312 "null", 00:16:29.312 "ffdhe2048", 00:16:29.312 "ffdhe3072", 00:16:29.312 "ffdhe4096", 00:16:29.312 "ffdhe6144", 00:16:29.312 "ffdhe8192" 00:16:29.312 ] 00:16:29.312 } 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "method": "nvmf_set_max_subsystems", 00:16:29.312 "params": { 00:16:29.312 "max_subsystems": 1024 
00:16:29.312 } 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "method": "nvmf_set_crdt", 00:16:29.312 "params": { 00:16:29.312 "crdt1": 0, 00:16:29.312 "crdt2": 0, 00:16:29.312 "crdt3": 0 00:16:29.312 } 00:16:29.312 } 00:16:29.312 ] 00:16:29.312 }, 00:16:29.312 { 00:16:29.312 "subsystem": "iscsi", 00:16:29.312 "config": [ 00:16:29.312 { 00:16:29.312 "method": "iscsi_set_options", 00:16:29.312 "params": { 00:16:29.312 "node_base": "iqn.2016-06.io.spdk", 00:16:29.312 "max_sessions": 128, 00:16:29.312 "max_connections_per_session": 2, 00:16:29.312 "max_queue_depth": 64, 00:16:29.312 "default_time2wait": 2, 00:16:29.312 "default_time2retain": 20, 00:16:29.312 "first_burst_length": 8192, 00:16:29.312 "immediate_data": true, 00:16:29.312 "allow_duplicated_isid": false, 00:16:29.312 "error_recovery_level": 0, 00:16:29.312 "nop_timeout": 60, 00:16:29.312 "nop_in_interval": 30, 00:16:29.312 "disable_chap": false, 00:16:29.312 "require_chap": false, 00:16:29.312 "mutual_chap": false, 00:16:29.312 "chap_group": 0, 00:16:29.312 "max_large_datain_per_connection": 64, 00:16:29.312 "max_r2t_per_connection": 4, 00:16:29.312 "pdu_pool_size": 36864, 00:16:29.312 "immediate_data_pool_size": 16384, 00:16:29.312 "data_out_pool_size": 2048 00:16:29.312 } 00:16:29.312 } 00:16:29.312 ] 00:16:29.312 } 00:16:29.312 ] 00:16:29.312 }' 00:16:29.312 23:18:01 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73512 00:16:29.312 23:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73512 ']' 00:16:29.312 23:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73512 00:16:29.312 23:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:29.312 23:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.312 23:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73512 00:16:29.312 23:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.312 23:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.312 23:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73512' 00:16:29.312 killing process with pid 73512 00:16:29.312 23:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73512 00:16:29.312 23:18:01 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73512 00:16:30.255 [2024-11-25 23:18:02.427982] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:30.255 [2024-11-25 23:18:02.464136] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:30.255 [2024-11-25 23:18:02.464278] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:30.255 [2024-11-25 23:18:02.472087] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:30.255 [2024-11-25 23:18:02.472138] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:30.255 [2024-11-25 23:18:02.472149] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:30.255 [2024-11-25 23:18:02.472172] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:30.255 [2024-11-25 23:18:02.472312] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:31.637 23:18:03 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73567 00:16:31.637 23:18:03 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 73567 00:16:31.637 23:18:03 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73567 ']' 00:16:31.637 23:18:03 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.637 23:18:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:31.637 23:18:03 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.637 23:18:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:31.637 "subsystems": [ 00:16:31.637 { 00:16:31.637 "subsystem": "fsdev", 00:16:31.637 "config": [ 00:16:31.637 { 00:16:31.637 "method": "fsdev_set_opts", 00:16:31.637 "params": { 00:16:31.637 "fsdev_io_pool_size": 65535, 00:16:31.637 "fsdev_io_cache_size": 256 00:16:31.637 } 00:16:31.637 } 00:16:31.637 ] 00:16:31.637 }, 00:16:31.637 { 00:16:31.637 "subsystem": "keyring", 00:16:31.637 "config": [] 00:16:31.637 }, 00:16:31.637 { 00:16:31.637 "subsystem": "iobuf", 00:16:31.637 "config": [ 00:16:31.637 { 00:16:31.637 "method": "iobuf_set_options", 00:16:31.637 "params": { 00:16:31.637 "small_pool_count": 8192, 00:16:31.637 "large_pool_count": 1024, 00:16:31.637 "small_bufsize": 8192, 00:16:31.637 "large_bufsize": 135168, 00:16:31.637 "enable_numa": false 00:16:31.637 } 00:16:31.637 } 00:16:31.637 ] 00:16:31.637 }, 00:16:31.637 { 00:16:31.637 "subsystem": "sock", 00:16:31.637 "config": [ 00:16:31.637 { 00:16:31.637 "method": "sock_set_default_impl", 00:16:31.637 "params": { 00:16:31.637 "impl_name": "posix" 00:16:31.637 } 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "method": "sock_impl_set_options", 00:16:31.638 "params": { 00:16:31.638 "impl_name": "ssl", 00:16:31.638 "recv_buf_size": 4096, 00:16:31.638 "send_buf_size": 4096, 00:16:31.638 "enable_recv_pipe": true, 00:16:31.638 "enable_quickack": false, 00:16:31.638 "enable_placement_id": 0, 00:16:31.638 "enable_zerocopy_send_server": true, 00:16:31.638 "enable_zerocopy_send_client": false, 00:16:31.638 "zerocopy_threshold": 0, 00:16:31.638 "tls_version": 0, 00:16:31.638 "enable_ktls": false 00:16:31.638 } 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "method": "sock_impl_set_options", 00:16:31.638 "params": { 00:16:31.638 "impl_name": "posix", 00:16:31.638 "recv_buf_size": 2097152, 00:16:31.638 "send_buf_size": 2097152, 00:16:31.638 "enable_recv_pipe": true, 00:16:31.638 "enable_quickack": false, 00:16:31.638 "enable_placement_id": 0, 00:16:31.638 "enable_zerocopy_send_server": true, 00:16:31.638 "enable_zerocopy_send_client": false, 00:16:31.638 "zerocopy_threshold": 0, 00:16:31.638 "tls_version": 0, 00:16:31.638 "enable_ktls": false 00:16:31.638 } 00:16:31.638 } 00:16:31.638 ] 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "subsystem": "vmd", 00:16:31.638 "config": [] 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "subsystem": "accel", 00:16:31.638 "config": [ 00:16:31.638 { 00:16:31.638 "method": "accel_set_options", 00:16:31.638 "params": { 00:16:31.638 "small_cache_size": 128, 00:16:31.638 "large_cache_size": 16, 00:16:31.638 "task_count": 2048, 00:16:31.638 "sequence_count": 2048, 00:16:31.638 "buf_count": 2048 00:16:31.638 } 00:16:31.638 } 00:16:31.638 ] 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "subsystem": "bdev", 00:16:31.638 "config": [ 00:16:31.638 { 00:16:31.638 "method": "bdev_set_options", 00:16:31.638 "params": { 00:16:31.638 "bdev_io_pool_size": 65535, 00:16:31.638 "bdev_io_cache_size": 256, 00:16:31.638 "bdev_auto_examine": true, 
00:16:31.638 "iobuf_small_cache_size": 128, 00:16:31.638 "iobuf_large_cache_size": 16 00:16:31.638 } 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "method": "bdev_raid_set_options", 00:16:31.638 "params": { 00:16:31.638 "process_window_size_kb": 1024, 00:16:31.638 "process_max_bandwidth_mb_sec": 0 00:16:31.638 } 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "method": "bdev_iscsi_set_options", 00:16:31.638 "params": { 00:16:31.638 "timeout_sec": 30 00:16:31.638 } 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "method": "bdev_nvme_set_options", 00:16:31.638 "params": { 00:16:31.638 "action_on_timeout": "none", 00:16:31.638 "timeout_us": 0, 00:16:31.638 "timeout_admin_us": 0, 00:16:31.638 "keep_alive_timeout_ms": 10000, 00:16:31.638 "arbitration_burst": 0, 00:16:31.638 "low_priority_weight": 0, 00:16:31.638 "medium_priority_weight": 0, 00:16:31.638 "high_priority_weight": 0, 00:16:31.638 "nvme_adminq_poll_period_us": 10000, 00:16:31.638 "nvme_ioq_poll_period_us": 0, 00:16:31.638 "io_queue_requests": 0, 00:16:31.638 "delay_cmd_submit": true, 00:16:31.638 "transport_retry_count": 4, 00:16:31.638 "bdev_retry_count": 3, 00:16:31.638 "transport_ack_timeout": 0, 00:16:31.638 "ctrlr_loss_timeout_sec": 0, 00:16:31.638 "reconnect_delay_sec": 0, 00:16:31.638 "fast_io_fail_timeout_sec": 0, 00:16:31.638 "disable_auto_failback": false, 00:16:31.638 "generate_uuids": false, 00:16:31.638 "transport_tos": 0, 00:16:31.638 "nvme_error_stat": false, 00:16:31.638 "rdma_srq_size": 0, 00:16:31.638 "io_path_stat": false, 00:16:31.638 "allow_accel_sequence": false, 00:16:31.638 "rdma_max_cq_size": 0, 00:16:31.638 "rdma_cm_event_timeout_ms": 0, 00:16:31.638 "dhchap_digests": [ 00:16:31.638 "sha256", 00:16:31.638 "sha384", 00:16:31.638 "sha512" 00:16:31.638 ], 00:16:31.638 "dhchap_dhgroups": [ 00:16:31.638 "null", 00:16:31.638 "ffdhe2048", 00:16:31.638 "ffdhe3072", 00:16:31.638 "ffdhe4096", 00:16:31.638 "ffdhe6144", 00:16:31.638 "ffdhe8192" 00:16:31.638 ] 00:16:31.638 } 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "method": "bdev_nvme_set_hotplug", 00:16:31.638 "params": { 00:16:31.638 "period_us": 100000, 00:16:31.638 "enable": false 00:16:31.638 } 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "method": "bdev_malloc_create", 00:16:31.638 "params": { 00:16:31.638 "name": "malloc0", 00:16:31.638 "num_blocks": 8192, 00:16:31.638 "block_size": 4096, 00:16:31.638 "physical_block_size": 4096, 00:16:31.638 "uuid": "c70a445c-4479-46e3-961a-e0a5ff1847d3", 00:16:31.638 "optimal_io_boundary": 0, 00:16:31.638 "md_size": 0, 00:16:31.638 "dif_type": 0, 00:16:31.638 "dif_is_head_of_md": false, 00:16:31.638 "dif_pi_format": 0 00:16:31.638 } 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "method": "bdev_wait_for_examine" 00:16:31.638 } 00:16:31.638 ] 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "subsystem": "scsi", 00:16:31.638 "config": null 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "subsystem": "scheduler", 00:16:31.638 "config": [ 00:16:31.638 { 00:16:31.638 "method": "framework_set_scheduler", 00:16:31.638 "params": { 00:16:31.638 "name": "static" 00:16:31.638 } 00:16:31.638 } 00:16:31.638 ] 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "subsystem": "vhost_scsi", 00:16:31.638 "config": [] 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "subsystem": "vhost_blk", 00:16:31.638 "config": [] 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "subsystem": "ublk", 00:16:31.638 "config": [ 00:16:31.638 { 00:16:31.638 "method": "ublk_create_target", 00:16:31.638 "params": { 00:16:31.638 "cpumask": "1" 00:16:31.638 } 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 
"method": "ublk_start_disk", 00:16:31.638 "params": { 00:16:31.638 "bdev_name": "malloc0", 00:16:31.638 "ublk_id": 0, 00:16:31.638 "num_queues": 1, 00:16:31.638 "queue_depth": 128 00:16:31.638 } 00:16:31.638 } 00:16:31.638 ] 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "subsystem": "nbd", 00:16:31.638 "config": [] 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "subsystem": "nvmf", 00:16:31.638 "config": [ 00:16:31.638 { 00:16:31.638 "method": "nvmf_set_config", 00:16:31.638 "params": { 00:16:31.638 "discovery_filter": "match_any", 00:16:31.638 "admin_cmd_passthru": { 00:16:31.638 "identify_ctrlr": false 00:16:31.638 }, 00:16:31.638 "dhchap_digests": [ 00:16:31.638 "sha256", 00:16:31.638 "sha384", 00:16:31.638 "sha512" 00:16:31.638 ], 00:16:31.638 "dhchap_dhgroups": [ 00:16:31.638 "null", 00:16:31.638 "ffdhe2048", 00:16:31.638 "ffdhe3072", 00:16:31.638 "ffdhe4096", 00:16:31.638 "ffdhe6144", 00:16:31.638 "ffdhe8192" 00:16:31.638 ] 00:16:31.638 } 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "method": "nvmf_set_max_subsystems", 00:16:31.638 "params": { 00:16:31.638 "max_subsystems": 1024 00:16:31.638 } 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "method": "nvmf_set_crdt", 00:16:31.638 "params": { 00:16:31.638 "crdt1": 0, 00:16:31.638 "crdt2": 0, 00:16:31.638 "crdt3": 0 00:16:31.638 } 00:16:31.638 } 00:16:31.638 ] 00:16:31.638 }, 00:16:31.638 { 00:16:31.638 "subsystem": "iscsi", 00:16:31.638 "config": [ 00:16:31.638 { 00:16:31.638 "method": "iscsi_set_options", 00:16:31.638 "params": { 00:16:31.638 "node_base": "iqn.2016-06.io.spdk", 00:16:31.638 "max_sessions": 128, 00:16:31.638 "max_connections_per_session": 2, 00:16:31.638 "max_queue_depth": 64, 00:16:31.638 "default_time2wait": 2, 00:16:31.638 "default_time2retain": 20, 00:16:31.638 "first_burst_length": 8192, 00:16:31.638 "immediate_data": true, 00:16:31.638 "allow_duplicated_isid": false, 00:16:31.638 "error_recovery_level": 0, 00:16:31.638 "nop_timeout": 60, 00:16:31.638 "nop_in_interval": 30, 00:16:31.638 "disable_chap": false, 00:16:31.638 "require_chap": false, 00:16:31.638 "mutual_chap": false, 00:16:31.638 "chap_group": 0, 00:16:31.638 "max_large_datain_per_connection": 64, 00:16:31.638 "max_r2t_per_connection": 4, 00:16:31.638 "pdu_pool_size": 36864, 00:16:31.638 "immediate_data_pool_size": 16384, 00:16:31.638 "data_out_pool_size": 2048 00:16:31.638 } 00:16:31.638 } 00:16:31.638 ] 00:16:31.638 } 00:16:31.638 ] 00:16:31.638 }' 00:16:31.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.638 23:18:03 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.638 23:18:03 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.638 23:18:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:31.638 [2024-11-25 23:18:03.844193] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:16:31.638 [2024-11-25 23:18:03.844308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73567 ] 00:16:31.638 [2024-11-25 23:18:03.999127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.896 [2024-11-25 23:18:04.075937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.462 [2024-11-25 23:18:04.717072] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:32.462 [2024-11-25 23:18:04.717709] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:32.462 [2024-11-25 23:18:04.725159] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:32.462 [2024-11-25 23:18:04.725216] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:32.462 [2024-11-25 23:18:04.725224] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:32.462 [2024-11-25 23:18:04.725229] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:32.462 [2024-11-25 23:18:04.734122] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:32.462 [2024-11-25 23:18:04.734139] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:32.462 [2024-11-25 23:18:04.741075] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:32.462 [2024-11-25 23:18:04.741150] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:32.462 [2024-11-25 23:18:04.758071] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:32.462 23:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.462 23:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:32.462 23:18:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:32.462 23:18:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:32.462 23:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.462 23:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:32.462 23:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.720 23:18:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:32.720 23:18:04 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:32.720 23:18:04 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73567 00:16:32.720 23:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73567 ']' 00:16:32.720 23:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73567 00:16:32.720 23:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:32.720 23:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.720 23:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73567 00:16:32.720 23:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.720 23:18:04 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.720 killing process with pid 73567 00:16:32.720 23:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73567' 00:16:32.720 23:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73567 00:16:32.720 23:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73567 00:16:33.652 [2024-11-25 23:18:05.845990] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:33.652 [2024-11-25 23:18:05.884137] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:33.652 [2024-11-25 23:18:05.884233] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:33.652 [2024-11-25 23:18:05.892080] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:33.652 [2024-11-25 23:18:05.892120] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:33.652 [2024-11-25 23:18:05.892126] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:33.652 [2024-11-25 23:18:05.892145] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:33.652 [2024-11-25 23:18:05.892255] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:35.026 23:18:07 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:35.026 00:16:35.026 real 0m7.015s 00:16:35.026 user 0m4.873s 00:16:35.026 sys 0m2.750s 00:16:35.026 23:18:07 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.026 23:18:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:35.026 ************************************ 00:16:35.026 END TEST test_save_ublk_config 00:16:35.026 ************************************ 00:16:35.026 23:18:07 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73637 00:16:35.026 23:18:07 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:35.026 23:18:07 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73637 00:16:35.026 23:18:07 ublk -- common/autotest_common.sh@835 -- # '[' -z 73637 ']' 00:16:35.026 23:18:07 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.026 23:18:07 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:35.026 23:18:07 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.026 23:18:07 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.026 23:18:07 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.026 23:18:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.026 [2024-11-25 23:18:07.167158] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
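The waitforlisten helper seen here (the autotest_common.sh code echoing 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...') blocks until the new target answers RPCs. A stand-alone approximation of the idea, not the helper's actual body, is to poll a cheap RPC such as rpc_get_methods until it succeeds:

  # Poll the default RPC socket until the new spdk_tgt (pid 73637 here) answers;
  # rpc_get_methods is a lightweight RPC available on every target.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done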
00:16:35.026 [2024-11-25 23:18:07.167275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73637 ] 00:16:35.026 [2024-11-25 23:18:07.323100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:35.284 [2024-11-25 23:18:07.402018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.284 [2024-11-25 23:18:07.402043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.851 23:18:07 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.851 23:18:07 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:35.851 23:18:07 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:35.851 23:18:07 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:35.851 23:18:07 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.851 23:18:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.851 ************************************ 00:16:35.851 START TEST test_create_ublk 00:16:35.851 ************************************ 00:16:35.851 23:18:08 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:35.851 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:35.851 23:18:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.851 23:18:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.851 [2024-11-25 23:18:08.017077] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:35.851 [2024-11-25 23:18:08.018591] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:35.851 23:18:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.851 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:35.851 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:35.851 23:18:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.851 23:18:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.851 23:18:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.851 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:35.851 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:35.851 23:18:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.851 23:18:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.851 [2024-11-25 23:18:08.169177] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:35.851 [2024-11-25 23:18:08.169473] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:35.851 [2024-11-25 23:18:08.169486] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:35.851 [2024-11-25 23:18:08.169492] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:35.851 [2024-11-25 23:18:08.178230] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:35.851 [2024-11-25 23:18:08.178247] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:35.851 
[2024-11-25 23:18:08.185074] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:35.851 [2024-11-25 23:18:08.195116] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:36.111 [2024-11-25 23:18:08.221090] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:36.111 23:18:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.111 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:36.111 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:36.111 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:36.111 23:18:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.111 23:18:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.111 23:18:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.111 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:36.111 { 00:16:36.111 "ublk_device": "/dev/ublkb0", 00:16:36.111 "id": 0, 00:16:36.111 "queue_depth": 512, 00:16:36.111 "num_queues": 4, 00:16:36.111 "bdev_name": "Malloc0" 00:16:36.111 } 00:16:36.111 ]' 00:16:36.111 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:36.111 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:36.111 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:36.111 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:36.111 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:36.111 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:36.111 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:36.111 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:36.111 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:36.111 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:36.111 23:18:08 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:36.111 23:18:08 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:36.111 23:18:08 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:36.111 23:18:08 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:36.111 23:18:08 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:36.111 23:18:08 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:36.111 23:18:08 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:36.111 23:18:08 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:36.111 23:18:08 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:36.111 23:18:08 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:36.111 23:18:08 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:16:36.111 23:18:08 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:36.369 fio: verification read phase will never start because write phase uses all of runtime 00:16:36.369 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:36.369 fio-3.35 00:16:36.369 Starting 1 process 00:16:46.439 00:16:46.439 fio_test: (groupid=0, jobs=1): err= 0: pid=73685: Mon Nov 25 23:18:18 2024 00:16:46.439 write: IOPS=17.1k, BW=66.9MiB/s (70.1MB/s)(669MiB/10001msec); 0 zone resets 00:16:46.439 clat (usec): min=34, max=3929, avg=57.60, stdev=87.13 00:16:46.439 lat (usec): min=35, max=3929, avg=58.05, stdev=87.16 00:16:46.439 clat percentiles (usec): 00:16:46.439 | 1.00th=[ 40], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 46], 00:16:46.439 | 30.00th=[ 49], 40.00th=[ 50], 50.00th=[ 52], 60.00th=[ 55], 00:16:46.439 | 70.00th=[ 57], 80.00th=[ 61], 90.00th=[ 68], 95.00th=[ 73], 00:16:46.439 | 99.00th=[ 88], 99.50th=[ 184], 99.90th=[ 1598], 99.95th=[ 2474], 00:16:46.439 | 99.99th=[ 3458] 00:16:46.439 bw ( KiB/s): min=51632, max=81920, per=100.00%, avg=68611.84, stdev=8773.59, samples=19 00:16:46.439 iops : min=12908, max=20480, avg=17152.95, stdev=2193.41, samples=19 00:16:46.439 lat (usec) : 50=39.48%, 100=59.80%, 250=0.42%, 500=0.14%, 750=0.02% 00:16:46.439 lat (usec) : 1000=0.01% 00:16:46.439 lat (msec) : 2=0.06%, 4=0.08% 00:16:46.439 cpu : usr=3.10%, sys=15.66%, ctx=171233, majf=0, minf=797 00:16:46.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:46.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.439 issued rwts: total=0,171254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:46.439 00:16:46.439 Run status group 0 (all jobs): 00:16:46.439 WRITE: bw=66.9MiB/s (70.1MB/s), 66.9MiB/s-66.9MiB/s (70.1MB/s-70.1MB/s), io=669MiB (701MB), run=10001-10001msec 00:16:46.439 00:16:46.439 Disk stats (read/write): 00:16:46.439 ublkb0: ios=0/169739, merge=0/0, ticks=0/7900, in_queue=7901, util=99.10% 00:16:46.439 23:18:18 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:46.439 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.439 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.439 [2024-11-25 23:18:18.643376] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:46.439 [2024-11-25 23:18:18.678122] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:46.439 [2024-11-25 23:18:18.678747] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:46.439 [2024-11-25 23:18:18.683077] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:46.439 [2024-11-25 23:18:18.683315] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:46.439 [2024-11-25 23:18:18.683333] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:46.439 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.439 23:18:18 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
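The workload above comes from run_fio_test in lvol/common.sh: a 10-second timed write of pattern 0xcc over the first 128 MiB of /dev/ublkb0 with inline verification (hence fio's note that the verification read phase will never start). The fully expanded command is already visible in the fio_template two entries up; reproducing it by hand amounts to:

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0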
00:16:46.439 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:46.439 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.440 [2024-11-25 23:18:18.691146] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:46.440 request: 00:16:46.440 { 00:16:46.440 "ublk_id": 0, 00:16:46.440 "method": "ublk_stop_disk", 00:16:46.440 "req_id": 1 00:16:46.440 } 00:16:46.440 Got JSON-RPC error response 00:16:46.440 response: 00:16:46.440 { 00:16:46.440 "code": -19, 00:16:46.440 "message": "No such device" 00:16:46.440 } 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:46.440 23:18:18 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.440 [2024-11-25 23:18:18.711141] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:46.440 [2024-11-25 23:18:18.714984] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:46.440 [2024-11-25 23:18:18.715020] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.440 23:18:18 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.440 23:18:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.013 23:18:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.013 23:18:19 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:47.013 23:18:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:47.013 23:18:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.013 23:18:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.013 23:18:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.013 23:18:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:47.013 23:18:19 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:47.013 23:18:19 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:47.013 23:18:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:47.013 23:18:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.013 23:18:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.013 23:18:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.013 23:18:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:47.013 23:18:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:47.013 23:18:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:47.013 00:16:47.013 real 0m11.185s 00:16:47.013 user 0m0.628s 00:16:47.013 sys 0m1.645s 00:16:47.013 23:18:19 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.013 ************************************ 00:16:47.013 END TEST test_create_ublk 00:16:47.013 ************************************ 00:16:47.013 23:18:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.013 23:18:19 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:47.013 23:18:19 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:47.013 23:18:19 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.013 23:18:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.013 ************************************ 00:16:47.013 START TEST test_create_multi_ublk 00:16:47.013 ************************************ 00:16:47.013 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:47.013 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:47.013 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.013 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.013 [2024-11-25 23:18:19.267072] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:47.013 [2024-11-25 23:18:19.268716] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:47.013 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.013 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:47.013 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:47.013 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:47.013 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:47.013 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.013 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.274 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.274 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:47.274 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:47.274 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.274 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.274 [2024-11-25 23:18:19.521193] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
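The DEBUG trail here is the ublk bring-up sequence for test_create_multi_ublk: create the kernel-facing target, back it with a malloc bdev, then export the bdev, which walks the ADD_DEV / SET_PARAMS / START_DEV control commands. Driven directly through rpc.py with the same parameters as the rpc_cmd calls above, one iteration sketches out as:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
  # 128 MiB malloc bdev with 4096-byte blocks, as in 'bdev_malloc_create -b Malloc0 128 4096'
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096
  # export it as /dev/ublkb0 with 4 queues of depth 512
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512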
00:16:47.274 [2024-11-25 23:18:19.521543] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:47.274 [2024-11-25 23:18:19.521550] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:47.274 [2024-11-25 23:18:19.521559] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:47.274 [2024-11-25 23:18:19.534301] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:47.274 [2024-11-25 23:18:19.534324] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:47.274 [2024-11-25 23:18:19.545081] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:47.274 [2024-11-25 23:18:19.545625] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:47.274 [2024-11-25 23:18:19.562101] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:47.274 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.274 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:47.274 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:47.274 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:47.274 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.274 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.536 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.536 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:47.536 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:47.536 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.536 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.536 [2024-11-25 23:18:19.785184] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:47.536 [2024-11-25 23:18:19.785502] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:47.536 [2024-11-25 23:18:19.785511] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:47.536 [2024-11-25 23:18:19.785517] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:47.536 [2024-11-25 23:18:19.793092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:47.536 [2024-11-25 23:18:19.793110] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:47.536 [2024-11-25 23:18:19.801090] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:47.536 [2024-11-25 23:18:19.801625] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:47.536 [2024-11-25 23:18:19.806942] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:47.536 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.536 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:47.536 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:47.536 23:18:19 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:47.536 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.536 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.797 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.797 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:47.797 23:18:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:47.797 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.797 23:18:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.797 [2024-11-25 23:18:19.981173] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:47.797 [2024-11-25 23:18:19.981493] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:47.797 [2024-11-25 23:18:19.981500] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:47.797 [2024-11-25 23:18:19.981506] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:47.797 [2024-11-25 23:18:19.989084] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:47.797 [2024-11-25 23:18:19.989105] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:47.797 [2024-11-25 23:18:19.997079] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:47.797 [2024-11-25 23:18:19.997603] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:47.797 [2024-11-25 23:18:20.006096] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:47.797 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.797 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:47.797 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:47.797 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:47.797 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.797 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.058 [2024-11-25 23:18:20.181193] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:48.058 [2024-11-25 23:18:20.181517] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:48.058 [2024-11-25 23:18:20.181526] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:48.058 [2024-11-25 23:18:20.181531] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:48.058 [2024-11-25 
23:18:20.189096] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:48.058 [2024-11-25 23:18:20.189114] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:48.058 [2024-11-25 23:18:20.197083] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:48.058 [2024-11-25 23:18:20.197622] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:48.058 [2024-11-25 23:18:20.205150] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:48.058 { 00:16:48.058 "ublk_device": "/dev/ublkb0", 00:16:48.058 "id": 0, 00:16:48.058 "queue_depth": 512, 00:16:48.058 "num_queues": 4, 00:16:48.058 "bdev_name": "Malloc0" 00:16:48.058 }, 00:16:48.058 { 00:16:48.058 "ublk_device": "/dev/ublkb1", 00:16:48.058 "id": 1, 00:16:48.058 "queue_depth": 512, 00:16:48.058 "num_queues": 4, 00:16:48.058 "bdev_name": "Malloc1" 00:16:48.058 }, 00:16:48.058 { 00:16:48.058 "ublk_device": "/dev/ublkb2", 00:16:48.058 "id": 2, 00:16:48.058 "queue_depth": 512, 00:16:48.058 "num_queues": 4, 00:16:48.058 "bdev_name": "Malloc2" 00:16:48.058 }, 00:16:48.058 { 00:16:48.058 "ublk_device": "/dev/ublkb3", 00:16:48.058 "id": 3, 00:16:48.058 "queue_depth": 512, 00:16:48.058 "num_queues": 4, 00:16:48.058 "bdev_name": "Malloc3" 00:16:48.058 } 00:16:48.058 ]' 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
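The [[ ... = ... ]] assertions running through this stretch are jq field lookups over the ublk_get_disks JSON printed above. The checks for the second disk, for instance, reduce to something like:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks | jq -r '.[1].ublk_device'   # expect /dev/ublkb1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks | jq -r '.[1].queue_depth'   # expect 512
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks | jq -r '.[1].bdev_name'     # expect Malloc1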
00:16:48.058 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:48.321 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.583 [2024-11-25 23:18:20.869145] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.583 [2024-11-25 23:18:20.906650] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.583 [2024-11-25 23:18:20.907533] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:48.583 [2024-11-25 23:18:20.917083] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:48.583 [2024-11-25 23:18:20.917309] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:48.583 [2024-11-25 23:18:20.917318] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.583 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.583 [2024-11-25 23:18:20.933136] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.845 [2024-11-25 23:18:20.965112] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.845 [2024-11-25 23:18:20.965760] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:48.845 [2024-11-25 23:18:20.973097] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:48.845 [2024-11-25 23:18:20.973319] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:48.845 [2024-11-25 23:18:20.973328] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:48.845 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.845 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.845 23:18:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:48.845 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.845 23:18:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.845 [2024-11-25 23:18:20.989140] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.845 [2024-11-25 23:18:21.023652] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.845 [2024-11-25 23:18:21.024497] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:48.845 [2024-11-25 23:18:21.037074] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:48.845 [2024-11-25 23:18:21.037294] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:48.845 [2024-11-25 23:18:21.037303] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:48.845 23:18:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.845 23:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.845 23:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:48.845 23:18:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.845 23:18:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
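Teardown mirrors setup: each ublk_stop_disk triggers the STOP_DEV / DEL_DEV control-command pair logged here, and ublk_destroy_target (invoked below with a 120-second timeout) shuts the target down once the device tailq is empty. Condensed into plain rpc.py calls for this test's four disks:

  for id in 0 1 2 3; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_stop_disk "$id"
  done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target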
00:16:48.845 [2024-11-25 23:18:21.048148] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.845 [2024-11-25 23:18:21.092102] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.845 [2024-11-25 23:18:21.092689] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:48.845 [2024-11-25 23:18:21.096339] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:48.845 [2024-11-25 23:18:21.096550] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:48.845 [2024-11-25 23:18:21.096557] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:48.845 23:18:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.845 23:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:49.107 [2024-11-25 23:18:21.291120] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:49.107 [2024-11-25 23:18:21.294901] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:49.107 [2024-11-25 23:18:21.294926] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:49.107 23:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:49.107 23:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:49.107 23:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:49.107 23:18:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.107 23:18:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:49.367 23:18:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.368 23:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:49.368 23:18:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:49.368 23:18:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.368 23:18:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:49.941 23:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.941 23:18:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:49.941 23:18:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:49.941 23:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.941 23:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:49.941 23:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.941 23:18:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:49.941 23:18:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:49.941 23:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.941 23:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:50.202 ************************************ 00:16:50.202 END TEST test_create_multi_ublk 00:16:50.202 ************************************ 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:50.202 00:16:50.202 real 0m3.296s 00:16:50.202 user 0m0.805s 00:16:50.202 sys 0m0.145s 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.202 23:18:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:50.463 23:18:22 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:50.463 23:18:22 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:50.463 23:18:22 ublk -- ublk/ublk.sh@130 -- # killprocess 73637 00:16:50.463 23:18:22 ublk -- common/autotest_common.sh@954 -- # '[' -z 73637 ']' 00:16:50.463 23:18:22 ublk -- common/autotest_common.sh@958 -- # kill -0 73637 00:16:50.463 23:18:22 ublk -- common/autotest_common.sh@959 -- # uname 00:16:50.463 23:18:22 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.463 23:18:22 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73637 00:16:50.463 killing process with pid 73637 00:16:50.463 23:18:22 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.463 23:18:22 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.463 23:18:22 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73637' 00:16:50.463 23:18:22 ublk -- common/autotest_common.sh@973 -- # kill 73637 00:16:50.463 23:18:22 ublk -- common/autotest_common.sh@978 -- # wait 73637 00:16:51.036 [2024-11-25 23:18:23.185575] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:51.036 [2024-11-25 23:18:23.185627] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:51.608 00:16:51.608 real 0m24.006s 00:16:51.608 user 0m34.791s 00:16:51.608 sys 0m9.535s 00:16:51.608 23:18:23 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.608 ************************************ 00:16:51.608 END TEST ublk 00:16:51.608 ************************************ 00:16:51.608 23:18:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.608 23:18:23 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:51.608 23:18:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:16:51.608 23:18:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.608 23:18:23 -- common/autotest_common.sh@10 -- # set +x 00:16:51.608 ************************************ 00:16:51.608 START TEST ublk_recovery 00:16:51.608 ************************************ 00:16:51.608 23:18:23 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:51.870 * Looking for test storage... 00:16:51.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:51.870 23:18:24 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:51.870 23:18:24 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:51.870 23:18:24 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:51.870 23:18:24 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.870 23:18:24 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:51.870 23:18:24 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.870 23:18:24 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:51.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.870 --rc genhtml_branch_coverage=1 00:16:51.870 --rc genhtml_function_coverage=1 00:16:51.870 --rc genhtml_legend=1 00:16:51.870 --rc geninfo_all_blocks=1 00:16:51.870 --rc geninfo_unexecuted_blocks=1 00:16:51.870 00:16:51.870 ' 00:16:51.870 23:18:24 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:51.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.870 --rc genhtml_branch_coverage=1 00:16:51.870 --rc genhtml_function_coverage=1 00:16:51.870 --rc genhtml_legend=1 00:16:51.870 --rc geninfo_all_blocks=1 00:16:51.870 --rc geninfo_unexecuted_blocks=1 00:16:51.870 00:16:51.870 ' 00:16:51.871 23:18:24 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:51.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.871 --rc genhtml_branch_coverage=1 00:16:51.871 --rc genhtml_function_coverage=1 00:16:51.871 --rc genhtml_legend=1 00:16:51.871 --rc geninfo_all_blocks=1 00:16:51.871 --rc geninfo_unexecuted_blocks=1 00:16:51.871 00:16:51.871 ' 00:16:51.871 23:18:24 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:51.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.871 --rc genhtml_branch_coverage=1 00:16:51.871 --rc genhtml_function_coverage=1 00:16:51.871 --rc genhtml_legend=1 00:16:51.871 --rc geninfo_all_blocks=1 00:16:51.871 --rc geninfo_unexecuted_blocks=1 00:16:51.871 00:16:51.871 ' 00:16:51.871 23:18:24 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:51.871 23:18:24 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:51.871 23:18:24 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:51.871 23:18:24 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:51.871 23:18:24 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:51.871 23:18:24 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:51.871 23:18:24 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:51.871 23:18:24 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:51.871 23:18:24 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:51.871 23:18:24 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:51.871 23:18:24 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74040 00:16:51.871 23:18:24 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.871 23:18:24 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:51.871 23:18:24 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74040 00:16:51.871 23:18:24 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74040 ']' 00:16:51.871 23:18:24 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.871 23:18:24 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.871 23:18:24 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.871 23:18:24 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.871 23:18:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:51.871 [2024-11-25 23:18:24.170301] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:16:51.871 [2024-11-25 23:18:24.170429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74040 ] 00:16:52.132 [2024-11-25 23:18:24.332097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:52.133 [2024-11-25 23:18:24.470681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.133 [2024-11-25 23:18:24.470799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.078 23:18:25 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.078 23:18:25 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:53.078 23:18:25 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:53.078 23:18:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.078 23:18:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:53.078 [2024-11-25 23:18:25.271099] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:53.078 [2024-11-25 23:18:25.273731] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:53.078 23:18:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.078 23:18:25 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:53.078 23:18:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.078 23:18:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:53.078 malloc0 00:16:53.078 23:18:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.078 23:18:25 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:53.078 23:18:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.078 23:18:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:53.078 [2024-11-25 23:18:25.407280] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:53.078 [2024-11-25 23:18:25.407419] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:53.078 [2024-11-25 23:18:25.407435] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:53.078 [2024-11-25 23:18:25.407446] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:53.078 [2024-11-25 23:18:25.415132] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:53.078 [2024-11-25 23:18:25.415164] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:53.078 [2024-11-25 23:18:25.423133] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:53.078 [2024-11-25 23:18:25.423326] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:53.078 [2024-11-25 23:18:25.439232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:53.078 1 00:16:53.078 23:18:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.078 23:18:25 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:54.466 23:18:26 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74076 00:16:54.466 23:18:26 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:54.466 23:18:26 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:54.466 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.466 fio-3.35 00:16:54.466 Starting 1 process 00:16:59.736 23:18:31 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74040 00:16:59.736 23:18:31 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:05.022 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74040 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:05.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.022 23:18:36 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74190 00:17:05.022 23:18:36 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:05.022 23:18:36 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74190 00:17:05.022 23:18:36 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74190 ']' 00:17:05.022 23:18:36 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.022 23:18:36 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.022 23:18:36 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:05.022 23:18:36 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.022 23:18:36 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.022 23:18:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.022 [2024-11-25 23:18:36.533591] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:17:05.022 [2024-11-25 23:18:36.533717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74190 ] 00:17:05.022 [2024-11-25 23:18:36.691490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:05.022 [2024-11-25 23:18:36.820147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.022 [2024-11-25 23:18:36.820157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.281 23:18:37 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.281 23:18:37 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:05.281 23:18:37 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:05.281 23:18:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.281 23:18:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.281 [2024-11-25 23:18:37.621108] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:05.281 [2024-11-25 23:18:37.623627] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:05.281 23:18:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.281 23:18:37 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:05.281 23:18:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.281 23:18:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.542 malloc0 00:17:05.542 23:18:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.542 23:18:37 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:05.542 23:18:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.542 23:18:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.542 [2024-11-25 23:18:37.760286] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:05.542 [2024-11-25 23:18:37.760341] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:05.542 [2024-11-25 23:18:37.760353] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:05.542 [2024-11-25 23:18:37.766149] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:05.542 [2024-11-25 23:18:37.766186] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:05.542 1 00:17:05.542 23:18:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.542 23:18:37 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74076 00:17:06.479 [2024-11-25 23:18:38.770084] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:06.479 [2024-11-25 23:18:38.779083] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:06.479 [2024-11-25 23:18:38.779103] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:07.854 [2024-11-25 23:18:39.779133] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:07.854 [2024-11-25 23:18:39.785077] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:07.854 [2024-11-25 23:18:39.785093] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:17:08.495 [2024-11-25 23:18:40.785113] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:08.495 [2024-11-25 23:18:40.789086] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:08.495 [2024-11-25 23:18:40.789100] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:08.495 [2024-11-25 23:18:40.789109] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:08.495 [2024-11-25 23:18:40.789189] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:30.453 [2024-11-25 23:19:01.650093] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:30.453 [2024-11-25 23:19:01.653845] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:30.453 [2024-11-25 23:19:01.657086] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:30.453 [2024-11-25 23:19:01.657100] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:57.011 00:17:57.011 fio_test: (groupid=0, jobs=1): err= 0: pid=74079: Mon Nov 25 23:19:26 2024 00:17:57.011 read: IOPS=14.0k, BW=54.9MiB/s (57.5MB/s)(3291MiB/60002msec) 00:17:57.011 slat (nsec): min=1017, max=681904, avg=5433.92, stdev=2459.35 00:17:57.011 clat (usec): min=773, max=30219k, avg=4585.50, stdev=267413.86 00:17:57.011 lat (usec): min=929, max=30219k, avg=4590.93, stdev=267413.86 00:17:57.011 clat percentiles (usec): 00:17:57.011 | 1.00th=[ 1778], 5.00th=[ 1909], 10.00th=[ 1942], 20.00th=[ 1991], 00:17:57.011 | 30.00th=[ 2024], 40.00th=[ 2057], 50.00th=[ 2089], 60.00th=[ 2114], 00:17:57.011 | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2343], 95.00th=[ 3195], 00:17:57.011 | 99.00th=[ 5276], 99.50th=[ 5800], 99.90th=[ 7373], 99.95th=[ 8291], 00:17:57.011 | 99.99th=[13173] 00:17:57.011 bw ( KiB/s): min= 1560, max=123936, per=100.00%, avg=110653.60, stdev=19457.56, samples=60 00:17:57.011 iops : min= 390, max=30984, avg=27663.40, stdev=4864.39, samples=60 00:17:57.011 write: IOPS=14.0k, BW=54.8MiB/s (57.4MB/s)(3287MiB/60002msec); 0 zone resets 00:17:57.011 slat (nsec): min=1102, max=487339, avg=5612.47, stdev=2473.50 00:17:57.011 clat (usec): min=871, max=30219k, avg=4523.51, stdev=259358.27 00:17:57.011 lat (usec): min=876, max=30219k, avg=4529.12, stdev=259358.27 00:17:57.011 clat percentiles (usec): 00:17:57.011 | 1.00th=[ 1811], 5.00th=[ 1975], 10.00th=[ 2024], 20.00th=[ 2073], 00:17:57.011 | 30.00th=[ 2114], 40.00th=[ 2147], 50.00th=[ 2180], 60.00th=[ 2212], 00:17:57.011 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2409], 95.00th=[ 3195], 00:17:57.011 | 99.00th=[ 5276], 99.50th=[ 5866], 99.90th=[ 7504], 99.95th=[ 8356], 00:17:57.011 | 99.99th=[13042] 00:17:57.011 bw ( KiB/s): min= 1528, max=122928, per=100.00%, avg=110509.20, stdev=19410.02, samples=60 00:17:57.011 iops : min= 382, max=30732, avg=27627.30, stdev=4852.50, samples=60 00:17:57.011 lat (usec) : 1000=0.01% 00:17:57.011 lat (msec) : 2=15.72%, 4=81.52%, 10=2.74%, 20=0.02%, >=2000=0.01% 00:17:57.011 cpu : usr=3.25%, sys=15.90%, ctx=56272, majf=0, minf=14 00:17:57.011 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:57.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:57.011 issued 
rwts: total=842582,841458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:57.011 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:57.011 00:17:57.011 Run status group 0 (all jobs): 00:17:57.011 READ: bw=54.9MiB/s (57.5MB/s), 54.9MiB/s-54.9MiB/s (57.5MB/s-57.5MB/s), io=3291MiB (3451MB), run=60002-60002msec 00:17:57.011 WRITE: bw=54.8MiB/s (57.4MB/s), 54.8MiB/s-54.8MiB/s (57.4MB/s-57.4MB/s), io=3287MiB (3447MB), run=60002-60002msec 00:17:57.011 00:17:57.011 Disk stats (read/write): 00:17:57.011 ublkb1: ios=839574/838523, merge=0/0, ticks=3801072/3673216, in_queue=7474288, util=99.91% 00:17:57.011 23:19:26 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:57.011 [2024-11-25 23:19:26.699276] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:57.011 [2024-11-25 23:19:26.739201] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:57.011 [2024-11-25 23:19:26.739358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:57.011 [2024-11-25 23:19:26.746088] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:57.011 [2024-11-25 23:19:26.746197] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:57.011 [2024-11-25 23:19:26.746204] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.011 23:19:26 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:57.011 [2024-11-25 23:19:26.762174] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:57.011 [2024-11-25 23:19:26.770071] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:57.011 [2024-11-25 23:19:26.770104] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.011 23:19:26 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:57.011 23:19:26 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:57.011 23:19:26 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74190 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74190 ']' 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74190 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74190 00:17:57.011 killing process with pid 74190 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74190' 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74190 00:17:57.011 23:19:26 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74190 
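Taken together, the trace above is the whole ublk crash-recovery scenario: a ublk disk is exported over a malloc bdev, fio drives random mixed I/O at /dev/ublkb1, the first spdk_tgt (pid 74040) is killed with SIGKILL mid-run, a second target (pid 74190) re-creates the bdev and re-attaches the existing kernel device, and the same fio job then completes its full 60-second run with util=99.91%. A condensed sketch of the RPC sequence being exercised, using only the commands visible in this trace (process management and the waits are simplified):

    # first target: export a 64 MiB, 4 KiB-block malloc bdev as /dev/ublkb1
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128        # 2 queues, queue depth 128
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &

    kill -9 "$spdk_pid"                                         # crash the target mid-I/O

    # second target: same bdev name, then recover instead of start
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1                  # re-binds existing /dev/ublkb1
    wait                                                        # fio finishes its 60 s run

    # teardown
    scripts/rpc.py ublk_stop_disk 1
    scripts/rpc.py ublk_destroy_target

The once-per-second UBLK_CMD_GET_DEV_INFO / "device state 1" lines are the new target polling the orphaned kernel device until it can issue the UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY pair; recovery only re-attaches the kernel side, and the malloc bdev itself is volatile, so the data exists only for the lifetime of the run.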
00:17:57.011 [2024-11-25 23:19:27.978104] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:57.011 [2024-11-25 23:19:27.978170] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:57.011 00:17:57.011 real 1m5.135s 00:17:57.011 user 1m46.519s 00:17:57.011 sys 0m24.417s 00:17:57.011 23:19:29 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.011 23:19:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:57.011 ************************************ 00:17:57.011 END TEST ublk_recovery 00:17:57.011 ************************************ 00:17:57.011 23:19:29 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:57.011 23:19:29 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:57.011 23:19:29 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:57.011 23:19:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:57.012 23:19:29 -- common/autotest_common.sh@10 -- # set +x 00:17:57.012 23:19:29 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:57.012 23:19:29 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:57.012 23:19:29 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:57.012 23:19:29 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:57.012 23:19:29 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:57.012 23:19:29 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:57.012 23:19:29 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:57.012 23:19:29 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:57.012 23:19:29 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:57.012 23:19:29 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:57.012 23:19:29 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:57.012 23:19:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:57.012 23:19:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.012 23:19:29 -- common/autotest_common.sh@10 -- # set +x 00:17:57.012 ************************************ 00:17:57.012 START TEST ftl 00:17:57.012 ************************************ 00:17:57.012 23:19:29 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:57.012 * Looking for test storage... 
00:17:57.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:57.012 23:19:29 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:57.012 23:19:29 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:17:57.012 23:19:29 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:57.012 23:19:29 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:57.012 23:19:29 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.012 23:19:29 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.012 23:19:29 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.012 23:19:29 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.012 23:19:29 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.012 23:19:29 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.012 23:19:29 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.012 23:19:29 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.012 23:19:29 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.012 23:19:29 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.012 23:19:29 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.012 23:19:29 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:57.012 23:19:29 ftl -- scripts/common.sh@345 -- # : 1 00:17:57.012 23:19:29 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.012 23:19:29 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:57.012 23:19:29 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:57.012 23:19:29 ftl -- scripts/common.sh@353 -- # local d=1 00:17:57.012 23:19:29 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.012 23:19:29 ftl -- scripts/common.sh@355 -- # echo 1 00:17:57.012 23:19:29 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.012 23:19:29 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:57.012 23:19:29 ftl -- scripts/common.sh@353 -- # local d=2 00:17:57.012 23:19:29 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.012 23:19:29 ftl -- scripts/common.sh@355 -- # echo 2 00:17:57.012 23:19:29 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.012 23:19:29 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.012 23:19:29 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.012 23:19:29 ftl -- scripts/common.sh@368 -- # return 0 00:17:57.012 23:19:29 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.012 23:19:29 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:57.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.012 --rc genhtml_branch_coverage=1 00:17:57.012 --rc genhtml_function_coverage=1 00:17:57.012 --rc genhtml_legend=1 00:17:57.012 --rc geninfo_all_blocks=1 00:17:57.012 --rc geninfo_unexecuted_blocks=1 00:17:57.012 00:17:57.012 ' 00:17:57.012 23:19:29 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:57.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.012 --rc genhtml_branch_coverage=1 00:17:57.012 --rc genhtml_function_coverage=1 00:17:57.012 --rc genhtml_legend=1 00:17:57.012 --rc geninfo_all_blocks=1 00:17:57.012 --rc geninfo_unexecuted_blocks=1 00:17:57.012 00:17:57.012 ' 00:17:57.012 23:19:29 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:57.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.012 --rc genhtml_branch_coverage=1 00:17:57.012 --rc genhtml_function_coverage=1 00:17:57.012 --rc 
genhtml_legend=1 00:17:57.012 --rc geninfo_all_blocks=1 00:17:57.012 --rc geninfo_unexecuted_blocks=1 00:17:57.012 00:17:57.012 ' 00:17:57.012 23:19:29 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:57.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.012 --rc genhtml_branch_coverage=1 00:17:57.012 --rc genhtml_function_coverage=1 00:17:57.012 --rc genhtml_legend=1 00:17:57.012 --rc geninfo_all_blocks=1 00:17:57.012 --rc geninfo_unexecuted_blocks=1 00:17:57.012 00:17:57.012 ' 00:17:57.012 23:19:29 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:57.012 23:19:29 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:57.012 23:19:29 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:57.012 23:19:29 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:57.012 23:19:29 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:57.012 23:19:29 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:57.012 23:19:29 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.012 23:19:29 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:57.012 23:19:29 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:57.012 23:19:29 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:57.012 23:19:29 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:57.012 23:19:29 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:57.012 23:19:29 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:57.012 23:19:29 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:57.012 23:19:29 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:57.012 23:19:29 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:57.012 23:19:29 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:57.012 23:19:29 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:57.012 23:19:29 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:57.012 23:19:29 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:57.012 23:19:29 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:57.012 23:19:29 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:57.012 23:19:29 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:57.012 23:19:29 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:57.012 23:19:29 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:57.012 23:19:29 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:57.012 23:19:29 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:57.012 23:19:29 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:57.012 23:19:29 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:57.012 23:19:29 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.012 23:19:29 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:57.012 23:19:29 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:17:57.012 23:19:29 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:57.012 23:19:29 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:57.012 23:19:29 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:57.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:57.533 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:57.533 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:57.533 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:57.533 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:57.533 23:19:29 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74996 00:17:57.533 23:19:29 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:57.533 23:19:29 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74996 00:17:57.533 23:19:29 ftl -- common/autotest_common.sh@835 -- # '[' -z 74996 ']' 00:17:57.533 23:19:29 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.533 23:19:29 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.533 23:19:29 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.533 23:19:29 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.533 23:19:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:57.533 [2024-11-25 23:19:29.885179] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:17:57.533 [2024-11-25 23:19:29.885584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74996 ] 00:17:57.791 [2024-11-25 23:19:30.047806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.050 [2024-11-25 23:19:30.159078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.617 23:19:30 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.617 23:19:30 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:58.617 23:19:30 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:58.617 23:19:30 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:59.551 23:19:31 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:59.551 23:19:31 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:59.810 23:19:32 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:59.810 23:19:32 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:59.810 23:19:32 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:00.068 23:19:32 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:00.069 23:19:32 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:00.069 23:19:32 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:00.069 23:19:32 ftl -- ftl/ftl.sh@50 -- # break 00:18:00.069 23:19:32 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:00.069 23:19:32 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:18:00.069 23:19:32 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:00.069 23:19:32 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:00.327 23:19:32 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:00.327 23:19:32 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:00.327 23:19:32 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:00.327 23:19:32 ftl -- ftl/ftl.sh@63 -- # break 00:18:00.327 23:19:32 ftl -- ftl/ftl.sh@66 -- # killprocess 74996 00:18:00.327 23:19:32 ftl -- common/autotest_common.sh@954 -- # '[' -z 74996 ']' 00:18:00.327 23:19:32 ftl -- common/autotest_common.sh@958 -- # kill -0 74996 00:18:00.327 23:19:32 ftl -- common/autotest_common.sh@959 -- # uname 00:18:00.327 23:19:32 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.327 23:19:32 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74996 00:18:00.327 killing process with pid 74996 00:18:00.327 23:19:32 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.327 23:19:32 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.327 23:19:32 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74996' 00:18:00.327 23:19:32 ftl -- common/autotest_common.sh@973 -- # kill 74996 00:18:00.327 23:19:32 ftl -- common/autotest_common.sh@978 -- # wait 74996 00:18:01.341 23:19:33 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:01.341 23:19:33 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:01.341 23:19:33 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:01.341 23:19:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.341 23:19:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:01.341 ************************************ 00:18:01.341 START TEST ftl_fio_basic 00:18:01.341 ************************************ 00:18:01.341 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:01.601 * Looking for test storage... 
00:18:01.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:01.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.602 --rc genhtml_branch_coverage=1 00:18:01.602 --rc genhtml_function_coverage=1 00:18:01.602 --rc genhtml_legend=1 00:18:01.602 --rc geninfo_all_blocks=1 00:18:01.602 --rc geninfo_unexecuted_blocks=1 00:18:01.602 00:18:01.602 ' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:01.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.602 --rc 
genhtml_branch_coverage=1 00:18:01.602 --rc genhtml_function_coverage=1 00:18:01.602 --rc genhtml_legend=1 00:18:01.602 --rc geninfo_all_blocks=1 00:18:01.602 --rc geninfo_unexecuted_blocks=1 00:18:01.602 00:18:01.602 ' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:01.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.602 --rc genhtml_branch_coverage=1 00:18:01.602 --rc genhtml_function_coverage=1 00:18:01.602 --rc genhtml_legend=1 00:18:01.602 --rc geninfo_all_blocks=1 00:18:01.602 --rc geninfo_unexecuted_blocks=1 00:18:01.602 00:18:01.602 ' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:01.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.602 --rc genhtml_branch_coverage=1 00:18:01.602 --rc genhtml_function_coverage=1 00:18:01.602 --rc genhtml_legend=1 00:18:01.602 --rc geninfo_all_blocks=1 00:18:01.602 --rc geninfo_unexecuted_blocks=1 00:18:01.602 00:18:01.602 ' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:01.602 
23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75124 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75124 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75124 ']' 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
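waitforlisten, traced here with rpc_addr=/var/tmp/spdk.sock and max_retries=100, is what gates every test against a freshly launched spdk_tgt: it blocks until the target's RPC socket answers, or fails early if the process dies during startup. Its body is not expanded in this trace, so the following is only a minimal sketch of the polling loop such a helper needs, assuming rpc.py's -s socket flag and the cheap spdk_get_version RPC:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i=0
        while (( i++ < max_retries )); do
            kill -0 "$pid" 2>/dev/null || return 1     # target died during startup
            # any no-op RPC proves the UNIX socket is up and answering
            scripts/rpc.py -s "$rpc_addr" spdk_get_version &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                       # gave up after max_retries polls
    }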
00:18:01.602 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.602 23:19:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:01.602 [2024-11-25 23:19:33.880707] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:18:01.602 [2024-11-25 23:19:33.880952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75124 ] 00:18:01.862 [2024-11-25 23:19:34.031150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:01.862 [2024-11-25 23:19:34.108829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.862 [2024-11-25 23:19:34.108839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.862 [2024-11-25 23:19:34.108834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.429 23:19:34 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.429 23:19:34 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:02.429 23:19:34 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:02.429 23:19:34 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:02.429 23:19:34 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:02.429 23:19:34 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:02.429 23:19:34 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:02.429 23:19:34 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:02.687 23:19:34 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:02.687 23:19:34 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:02.687 23:19:34 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:02.687 23:19:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:02.687 23:19:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:02.687 23:19:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:02.687 23:19:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:02.687 23:19:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:02.945 23:19:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:02.945 { 00:18:02.945 "name": "nvme0n1", 00:18:02.945 "aliases": [ 00:18:02.945 "ba8b7fc6-1c6d-478b-8441-82f2b7cd1877" 00:18:02.945 ], 00:18:02.945 "product_name": "NVMe disk", 00:18:02.945 "block_size": 4096, 00:18:02.945 "num_blocks": 1310720, 00:18:02.945 "uuid": "ba8b7fc6-1c6d-478b-8441-82f2b7cd1877", 00:18:02.945 "numa_id": -1, 00:18:02.945 "assigned_rate_limits": { 00:18:02.945 "rw_ios_per_sec": 0, 00:18:02.945 "rw_mbytes_per_sec": 0, 00:18:02.945 "r_mbytes_per_sec": 0, 00:18:02.945 "w_mbytes_per_sec": 0 00:18:02.945 }, 00:18:02.945 "claimed": false, 00:18:02.945 "zoned": false, 00:18:02.945 "supported_io_types": { 00:18:02.945 "read": true, 00:18:02.945 "write": true, 00:18:02.945 "unmap": true, 00:18:02.945 "flush": true, 00:18:02.945 "reset": true, 00:18:02.945 "nvme_admin": true, 00:18:02.945 "nvme_io": true, 00:18:02.945 "nvme_io_md": 
false, 00:18:02.945 "write_zeroes": true, 00:18:02.945 "zcopy": false, 00:18:02.945 "get_zone_info": false, 00:18:02.945 "zone_management": false, 00:18:02.945 "zone_append": false, 00:18:02.945 "compare": true, 00:18:02.945 "compare_and_write": false, 00:18:02.945 "abort": true, 00:18:02.945 "seek_hole": false, 00:18:02.945 "seek_data": false, 00:18:02.945 "copy": true, 00:18:02.945 "nvme_iov_md": false 00:18:02.945 }, 00:18:02.945 "driver_specific": { 00:18:02.945 "nvme": [ 00:18:02.945 { 00:18:02.945 "pci_address": "0000:00:11.0", 00:18:02.945 "trid": { 00:18:02.945 "trtype": "PCIe", 00:18:02.945 "traddr": "0000:00:11.0" 00:18:02.945 }, 00:18:02.945 "ctrlr_data": { 00:18:02.945 "cntlid": 0, 00:18:02.945 "vendor_id": "0x1b36", 00:18:02.945 "model_number": "QEMU NVMe Ctrl", 00:18:02.945 "serial_number": "12341", 00:18:02.945 "firmware_revision": "8.0.0", 00:18:02.945 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:02.945 "oacs": { 00:18:02.945 "security": 0, 00:18:02.945 "format": 1, 00:18:02.945 "firmware": 0, 00:18:02.945 "ns_manage": 1 00:18:02.945 }, 00:18:02.945 "multi_ctrlr": false, 00:18:02.945 "ana_reporting": false 00:18:02.945 }, 00:18:02.945 "vs": { 00:18:02.945 "nvme_version": "1.4" 00:18:02.945 }, 00:18:02.945 "ns_data": { 00:18:02.945 "id": 1, 00:18:02.945 "can_share": false 00:18:02.945 } 00:18:02.945 } 00:18:02.945 ], 00:18:02.945 "mp_policy": "active_passive" 00:18:02.945 } 00:18:02.945 } 00:18:02.945 ]' 00:18:02.945 23:19:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:02.945 23:19:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:02.945 23:19:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:02.945 23:19:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:02.945 23:19:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:02.945 23:19:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:02.945 23:19:35 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:02.945 23:19:35 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:02.945 23:19:35 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:02.945 23:19:35 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:02.945 23:19:35 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:03.203 23:19:35 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:03.203 23:19:35 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:03.462 23:19:35 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=c29056ee-bd9f-41ea-be8d-46d874fcd7ec 00:18:03.462 23:19:35 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c29056ee-bd9f-41ea-be8d-46d874fcd7ec 00:18:03.720 23:19:35 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=27aaf5d0-3161-49e4-a85b-5c52a6e6c048 00:18:03.720 23:19:35 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 27aaf5d0-3161-49e4-a85b-5c52a6e6c048 00:18:03.720 23:19:35 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:03.720 23:19:35 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:03.720 23:19:35 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=27aaf5d0-3161-49e4-a85b-5c52a6e6c048 00:18:03.720 23:19:35 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:03.720 23:19:35 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 27aaf5d0-3161-49e4-a85b-5c52a6e6c048 00:18:03.720 23:19:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=27aaf5d0-3161-49e4-a85b-5c52a6e6c048 00:18:03.720 23:19:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:03.720 23:19:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:03.720 23:19:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:03.720 23:19:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27aaf5d0-3161-49e4-a85b-5c52a6e6c048 00:18:03.720 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:03.720 { 00:18:03.720 "name": "27aaf5d0-3161-49e4-a85b-5c52a6e6c048", 00:18:03.720 "aliases": [ 00:18:03.720 "lvs/nvme0n1p0" 00:18:03.720 ], 00:18:03.720 "product_name": "Logical Volume", 00:18:03.720 "block_size": 4096, 00:18:03.720 "num_blocks": 26476544, 00:18:03.720 "uuid": "27aaf5d0-3161-49e4-a85b-5c52a6e6c048", 00:18:03.720 "assigned_rate_limits": { 00:18:03.720 "rw_ios_per_sec": 0, 00:18:03.720 "rw_mbytes_per_sec": 0, 00:18:03.720 "r_mbytes_per_sec": 0, 00:18:03.721 "w_mbytes_per_sec": 0 00:18:03.721 }, 00:18:03.721 "claimed": false, 00:18:03.721 "zoned": false, 00:18:03.721 "supported_io_types": { 00:18:03.721 "read": true, 00:18:03.721 "write": true, 00:18:03.721 "unmap": true, 00:18:03.721 "flush": false, 00:18:03.721 "reset": true, 00:18:03.721 "nvme_admin": false, 00:18:03.721 "nvme_io": false, 00:18:03.721 "nvme_io_md": false, 00:18:03.721 "write_zeroes": true, 00:18:03.721 "zcopy": false, 00:18:03.721 "get_zone_info": false, 00:18:03.721 "zone_management": false, 00:18:03.721 "zone_append": false, 00:18:03.721 "compare": false, 00:18:03.721 "compare_and_write": false, 00:18:03.721 "abort": false, 00:18:03.721 "seek_hole": true, 00:18:03.721 "seek_data": true, 00:18:03.721 "copy": false, 00:18:03.721 "nvme_iov_md": false 00:18:03.721 }, 00:18:03.721 "driver_specific": { 00:18:03.721 "lvol": { 00:18:03.721 "lvol_store_uuid": "c29056ee-bd9f-41ea-be8d-46d874fcd7ec", 00:18:03.721 "base_bdev": "nvme0n1", 00:18:03.721 "thin_provision": true, 00:18:03.721 "num_allocated_clusters": 0, 00:18:03.721 "snapshot": false, 00:18:03.721 "clone": false, 00:18:03.721 "esnap_clone": false 00:18:03.721 } 00:18:03.721 } 00:18:03.721 } 00:18:03.721 ]' 00:18:03.721 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:03.978 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:03.978 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:03.978 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:03.978 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:03.978 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:03.978 23:19:36 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:03.978 23:19:36 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:03.978 23:19:36 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:04.236 23:19:36 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:04.236 23:19:36 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:18:04.236 23:19:36 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 27aaf5d0-3161-49e4-a85b-5c52a6e6c048 00:18:04.236 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=27aaf5d0-3161-49e4-a85b-5c52a6e6c048 00:18:04.236 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:04.236 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:04.236 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:04.236 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27aaf5d0-3161-49e4-a85b-5c52a6e6c048 00:18:04.236 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:04.236 { 00:18:04.236 "name": "27aaf5d0-3161-49e4-a85b-5c52a6e6c048", 00:18:04.236 "aliases": [ 00:18:04.236 "lvs/nvme0n1p0" 00:18:04.236 ], 00:18:04.236 "product_name": "Logical Volume", 00:18:04.236 "block_size": 4096, 00:18:04.236 "num_blocks": 26476544, 00:18:04.236 "uuid": "27aaf5d0-3161-49e4-a85b-5c52a6e6c048", 00:18:04.236 "assigned_rate_limits": { 00:18:04.236 "rw_ios_per_sec": 0, 00:18:04.236 "rw_mbytes_per_sec": 0, 00:18:04.236 "r_mbytes_per_sec": 0, 00:18:04.236 "w_mbytes_per_sec": 0 00:18:04.236 }, 00:18:04.236 "claimed": false, 00:18:04.236 "zoned": false, 00:18:04.236 "supported_io_types": { 00:18:04.236 "read": true, 00:18:04.236 "write": true, 00:18:04.236 "unmap": true, 00:18:04.236 "flush": false, 00:18:04.236 "reset": true, 00:18:04.236 "nvme_admin": false, 00:18:04.236 "nvme_io": false, 00:18:04.236 "nvme_io_md": false, 00:18:04.236 "write_zeroes": true, 00:18:04.236 "zcopy": false, 00:18:04.236 "get_zone_info": false, 00:18:04.236 "zone_management": false, 00:18:04.236 "zone_append": false, 00:18:04.236 "compare": false, 00:18:04.236 "compare_and_write": false, 00:18:04.236 "abort": false, 00:18:04.236 "seek_hole": true, 00:18:04.236 "seek_data": true, 00:18:04.236 "copy": false, 00:18:04.236 "nvme_iov_md": false 00:18:04.236 }, 00:18:04.236 "driver_specific": { 00:18:04.236 "lvol": { 00:18:04.236 "lvol_store_uuid": "c29056ee-bd9f-41ea-be8d-46d874fcd7ec", 00:18:04.236 "base_bdev": "nvme0n1", 00:18:04.236 "thin_provision": true, 00:18:04.236 "num_allocated_clusters": 0, 00:18:04.236 "snapshot": false, 00:18:04.236 "clone": false, 00:18:04.236 "esnap_clone": false 00:18:04.236 } 00:18:04.236 } 00:18:04.236 } 00:18:04.236 ]' 00:18:04.236 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:04.494 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:04.494 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:04.494 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:04.494 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:04.494 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:04.494 23:19:36 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:04.494 23:19:36 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:04.494 23:19:36 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:04.494 23:19:36 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:04.494 23:19:36 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:04.494 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:04.494 23:19:36 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 27aaf5d0-3161-49e4-a85b-5c52a6e6c048 00:18:04.494 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=27aaf5d0-3161-49e4-a85b-5c52a6e6c048 00:18:04.495 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:04.495 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:04.495 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:04.495 23:19:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27aaf5d0-3161-49e4-a85b-5c52a6e6c048 00:18:04.753 23:19:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:04.753 { 00:18:04.753 "name": "27aaf5d0-3161-49e4-a85b-5c52a6e6c048", 00:18:04.753 "aliases": [ 00:18:04.753 "lvs/nvme0n1p0" 00:18:04.753 ], 00:18:04.753 "product_name": "Logical Volume", 00:18:04.753 "block_size": 4096, 00:18:04.753 "num_blocks": 26476544, 00:18:04.753 "uuid": "27aaf5d0-3161-49e4-a85b-5c52a6e6c048", 00:18:04.753 "assigned_rate_limits": { 00:18:04.753 "rw_ios_per_sec": 0, 00:18:04.753 "rw_mbytes_per_sec": 0, 00:18:04.753 "r_mbytes_per_sec": 0, 00:18:04.753 "w_mbytes_per_sec": 0 00:18:04.753 }, 00:18:04.753 "claimed": false, 00:18:04.753 "zoned": false, 00:18:04.753 "supported_io_types": { 00:18:04.753 "read": true, 00:18:04.753 "write": true, 00:18:04.753 "unmap": true, 00:18:04.753 "flush": false, 00:18:04.753 "reset": true, 00:18:04.753 "nvme_admin": false, 00:18:04.753 "nvme_io": false, 00:18:04.753 "nvme_io_md": false, 00:18:04.753 "write_zeroes": true, 00:18:04.753 "zcopy": false, 00:18:04.753 "get_zone_info": false, 00:18:04.753 "zone_management": false, 00:18:04.753 "zone_append": false, 00:18:04.753 "compare": false, 00:18:04.753 "compare_and_write": false, 00:18:04.753 "abort": false, 00:18:04.753 "seek_hole": true, 00:18:04.753 "seek_data": true, 00:18:04.753 "copy": false, 00:18:04.753 "nvme_iov_md": false 00:18:04.753 }, 00:18:04.753 "driver_specific": { 00:18:04.753 "lvol": { 00:18:04.753 "lvol_store_uuid": "c29056ee-bd9f-41ea-be8d-46d874fcd7ec", 00:18:04.753 "base_bdev": "nvme0n1", 00:18:04.753 "thin_provision": true, 00:18:04.753 "num_allocated_clusters": 0, 00:18:04.753 "snapshot": false, 00:18:04.753 "clone": false, 00:18:04.753 "esnap_clone": false 00:18:04.753 } 00:18:04.753 } 00:18:04.753 } 00:18:04.753 ]' 00:18:04.753 23:19:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:04.753 23:19:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:04.753 23:19:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:04.753 23:19:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:04.753 23:19:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:04.753 23:19:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:04.753 23:19:37 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:04.753 23:19:37 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:04.753 23:19:37 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 27aaf5d0-3161-49e4-a85b-5c52a6e6c048 -c nvc0n1p0 --l2p_dram_limit 60 00:18:05.012 [2024-11-25 23:19:37.280567] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.012 [2024-11-25 23:19:37.280611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:05.012 [2024-11-25 23:19:37.280628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:05.012 [2024-11-25 23:19:37.280637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.012 [2024-11-25 23:19:37.280709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.012 [2024-11-25 23:19:37.280722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:05.012 [2024-11-25 23:19:37.280735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:05.012 [2024-11-25 23:19:37.280744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.012 [2024-11-25 23:19:37.280796] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:05.012 [2024-11-25 23:19:37.281648] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:05.012 [2024-11-25 23:19:37.281802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.012 [2024-11-25 23:19:37.281818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:05.012 [2024-11-25 23:19:37.281832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:18:05.012 [2024-11-25 23:19:37.281844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.012 [2024-11-25 23:19:37.281976] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2e7fee10-207a-4749-a3ea-657bace4441a 00:18:05.012 [2024-11-25 23:19:37.283040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.012 [2024-11-25 23:19:37.283077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:05.013 [2024-11-25 23:19:37.283089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:05.013 [2024-11-25 23:19:37.283101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.013 [2024-11-25 23:19:37.288051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.013 [2024-11-25 23:19:37.288096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:05.013 [2024-11-25 23:19:37.288107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.887 ms 00:18:05.013 [2024-11-25 23:19:37.288137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.013 [2024-11-25 23:19:37.288241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.013 [2024-11-25 23:19:37.288255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:05.013 [2024-11-25 23:19:37.288266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:05.013 [2024-11-25 23:19:37.288282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.013 [2024-11-25 23:19:37.288338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.013 [2024-11-25 23:19:37.288352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:05.013 [2024-11-25 23:19:37.288364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:05.013 [2024-11-25 23:19:37.288375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:05.013 [2024-11-25 23:19:37.288405] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:05.013 [2024-11-25 23:19:37.292711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.013 [2024-11-25 23:19:37.292745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:05.013 [2024-11-25 23:19:37.292762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.311 ms 00:18:05.013 [2024-11-25 23:19:37.292772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.013 [2024-11-25 23:19:37.292827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.013 [2024-11-25 23:19:37.292839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:05.013 [2024-11-25 23:19:37.292852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:05.013 [2024-11-25 23:19:37.292861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.013 [2024-11-25 23:19:37.292909] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:05.013 [2024-11-25 23:19:37.293081] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:05.013 [2024-11-25 23:19:37.293103] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:05.013 [2024-11-25 23:19:37.293117] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:05.013 [2024-11-25 23:19:37.293132] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:05.013 [2024-11-25 23:19:37.293143] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:05.013 [2024-11-25 23:19:37.293155] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:05.013 [2024-11-25 23:19:37.293164] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:05.013 [2024-11-25 23:19:37.293176] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:05.013 [2024-11-25 23:19:37.293185] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:05.013 [2024-11-25 23:19:37.293199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.013 [2024-11-25 23:19:37.293209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:05.013 [2024-11-25 23:19:37.293222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:18:05.013 [2024-11-25 23:19:37.293231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.013 [2024-11-25 23:19:37.293328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.013 [2024-11-25 23:19:37.293339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:05.013 [2024-11-25 23:19:37.293351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:05.013 [2024-11-25 23:19:37.293360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.013 [2024-11-25 23:19:37.293493] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:05.013 [2024-11-25 23:19:37.293512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:05.013 
[2024-11-25 23:19:37.293525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:05.013 [2024-11-25 23:19:37.293535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.013 [2024-11-25 23:19:37.293547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:05.013 [2024-11-25 23:19:37.293556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:05.013 [2024-11-25 23:19:37.293568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:05.013 [2024-11-25 23:19:37.293577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:05.013 [2024-11-25 23:19:37.293589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:05.013 [2024-11-25 23:19:37.293598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:05.013 [2024-11-25 23:19:37.293609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:05.013 [2024-11-25 23:19:37.293618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:05.013 [2024-11-25 23:19:37.293629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:05.013 [2024-11-25 23:19:37.293638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:05.013 [2024-11-25 23:19:37.293649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:05.013 [2024-11-25 23:19:37.293658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.013 [2024-11-25 23:19:37.293672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:05.013 [2024-11-25 23:19:37.293682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:05.013 [2024-11-25 23:19:37.293693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.013 [2024-11-25 23:19:37.293702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:05.013 [2024-11-25 23:19:37.293712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:05.013 [2024-11-25 23:19:37.293722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.013 [2024-11-25 23:19:37.293732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:05.013 [2024-11-25 23:19:37.293742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:05.013 [2024-11-25 23:19:37.293752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.013 [2024-11-25 23:19:37.293761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:05.013 [2024-11-25 23:19:37.293772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:05.013 [2024-11-25 23:19:37.293781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.013 [2024-11-25 23:19:37.293791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:05.013 [2024-11-25 23:19:37.293801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:05.013 [2024-11-25 23:19:37.293811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.013 [2024-11-25 23:19:37.293820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:05.013 [2024-11-25 23:19:37.293833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:05.013 [2024-11-25 23:19:37.293855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:18:05.013 [2024-11-25 23:19:37.293866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:05.013 [2024-11-25 23:19:37.293874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:05.013 [2024-11-25 23:19:37.293884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:05.013 [2024-11-25 23:19:37.293894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:05.013 [2024-11-25 23:19:37.293905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:05.013 [2024-11-25 23:19:37.293913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.013 [2024-11-25 23:19:37.293924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:05.013 [2024-11-25 23:19:37.293936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:05.013 [2024-11-25 23:19:37.293948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.013 [2024-11-25 23:19:37.293957] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:05.013 [2024-11-25 23:19:37.293969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:05.013 [2024-11-25 23:19:37.293978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:05.013 [2024-11-25 23:19:37.293990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.013 [2024-11-25 23:19:37.294000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:05.013 [2024-11-25 23:19:37.294013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:05.013 [2024-11-25 23:19:37.294022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:05.013 [2024-11-25 23:19:37.294033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:05.013 [2024-11-25 23:19:37.294041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:05.013 [2024-11-25 23:19:37.294053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:05.013 [2024-11-25 23:19:37.294075] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:05.013 [2024-11-25 23:19:37.294090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:05.013 [2024-11-25 23:19:37.294102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:05.013 [2024-11-25 23:19:37.294113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:05.013 [2024-11-25 23:19:37.294130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:05.013 [2024-11-25 23:19:37.294141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:05.013 [2024-11-25 23:19:37.294151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:05.013 [2024-11-25 23:19:37.294163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:05.014 [2024-11-25 
23:19:37.294172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:05.014 [2024-11-25 23:19:37.294183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:05.014 [2024-11-25 23:19:37.294193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:05.014 [2024-11-25 23:19:37.294206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:05.014 [2024-11-25 23:19:37.294216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:05.014 [2024-11-25 23:19:37.294229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:05.014 [2024-11-25 23:19:37.294239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:05.014 [2024-11-25 23:19:37.294250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:05.014 [2024-11-25 23:19:37.294259] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:05.014 [2024-11-25 23:19:37.294271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:05.014 [2024-11-25 23:19:37.294283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:05.014 [2024-11-25 23:19:37.294294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:05.014 [2024-11-25 23:19:37.294306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:05.014 [2024-11-25 23:19:37.294318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:05.014 [2024-11-25 23:19:37.294328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.014 [2024-11-25 23:19:37.294341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:05.014 [2024-11-25 23:19:37.294351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:18:05.014 [2024-11-25 23:19:37.294363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.014 [2024-11-25 23:19:37.294425] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:05.014 [2024-11-25 23:19:37.294447] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:08.297 [2024-11-25 23:19:40.275798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.275982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:08.297 [2024-11-25 23:19:40.276049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2981.360 ms 00:18:08.297 [2024-11-25 23:19:40.276087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.300932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.301104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:08.297 [2024-11-25 23:19:40.301166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.624 ms 00:18:08.297 [2024-11-25 23:19:40.301192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.301351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.301382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:08.297 [2024-11-25 23:19:40.301449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:08.297 [2024-11-25 23:19:40.301475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.340372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.340546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:08.297 [2024-11-25 23:19:40.340623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.836 ms 00:18:08.297 [2024-11-25 23:19:40.340655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.340718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.340932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:08.297 [2024-11-25 23:19:40.340964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:08.297 [2024-11-25 23:19:40.340989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.341381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.341502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:08.297 [2024-11-25 23:19:40.341583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:18:08.297 [2024-11-25 23:19:40.341613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.341893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.341964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:08.297 [2024-11-25 23:19:40.342038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:18:08.297 [2024-11-25 23:19:40.342128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.357361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.357465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:08.297 [2024-11-25 
23:19:40.357517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.189 ms 00:18:08.297 [2024-11-25 23:19:40.357541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.368752] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:08.297 [2024-11-25 23:19:40.382588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.382698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:08.297 [2024-11-25 23:19:40.382717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.948 ms 00:18:08.297 [2024-11-25 23:19:40.382725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.432238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.432389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:08.297 [2024-11-25 23:19:40.432450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.479 ms 00:18:08.297 [2024-11-25 23:19:40.432473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.432666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.432692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:08.297 [2024-11-25 23:19:40.432716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:18:08.297 [2024-11-25 23:19:40.432772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.456080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.456188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:08.297 [2024-11-25 23:19:40.456239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.237 ms 00:18:08.297 [2024-11-25 23:19:40.456261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.479016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.479129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:08.297 [2024-11-25 23:19:40.479191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.707 ms 00:18:08.297 [2024-11-25 23:19:40.479211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.479786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.479862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:08.297 [2024-11-25 23:19:40.479914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:18:08.297 [2024-11-25 23:19:40.479936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.544534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.544665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:08.297 [2024-11-25 23:19:40.544725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.548 ms 00:18:08.297 [2024-11-25 23:19:40.544748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 
23:19:40.568736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.568851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:08.297 [2024-11-25 23:19:40.568903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.893 ms 00:18:08.297 [2024-11-25 23:19:40.568925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.591634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.591751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:08.297 [2024-11-25 23:19:40.591801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.662 ms 00:18:08.297 [2024-11-25 23:19:40.591822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.614662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.614781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:08.297 [2024-11-25 23:19:40.614801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.788 ms 00:18:08.297 [2024-11-25 23:19:40.614809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.614854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.614864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:08.297 [2024-11-25 23:19:40.614878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:08.297 [2024-11-25 23:19:40.614885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.614964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.297 [2024-11-25 23:19:40.614974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:08.297 [2024-11-25 23:19:40.614984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:08.297 [2024-11-25 23:19:40.614991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.297 [2024-11-25 23:19:40.615924] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3334.940 ms, result 0 00:18:08.297 { 00:18:08.297 "name": "ftl0", 00:18:08.297 "uuid": "2e7fee10-207a-4749-a3ea-657bace4441a" 00:18:08.297 } 00:18:08.297 23:19:40 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:08.297 23:19:40 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:08.297 23:19:40 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:08.297 23:19:40 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:08.297 23:19:40 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:08.297 23:19:40 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:08.298 23:19:40 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:08.557 23:19:40 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:08.816 [ 00:18:08.816 { 00:18:08.816 "name": "ftl0", 00:18:08.816 "aliases": [ 00:18:08.816 "2e7fee10-207a-4749-a3ea-657bace4441a" 00:18:08.816 ], 00:18:08.816 "product_name": "FTL 
disk", 00:18:08.816 "block_size": 4096, 00:18:08.816 "num_blocks": 20971520, 00:18:08.816 "uuid": "2e7fee10-207a-4749-a3ea-657bace4441a", 00:18:08.816 "assigned_rate_limits": { 00:18:08.816 "rw_ios_per_sec": 0, 00:18:08.816 "rw_mbytes_per_sec": 0, 00:18:08.816 "r_mbytes_per_sec": 0, 00:18:08.816 "w_mbytes_per_sec": 0 00:18:08.816 }, 00:18:08.816 "claimed": false, 00:18:08.816 "zoned": false, 00:18:08.816 "supported_io_types": { 00:18:08.816 "read": true, 00:18:08.816 "write": true, 00:18:08.816 "unmap": true, 00:18:08.816 "flush": true, 00:18:08.816 "reset": false, 00:18:08.816 "nvme_admin": false, 00:18:08.816 "nvme_io": false, 00:18:08.816 "nvme_io_md": false, 00:18:08.816 "write_zeroes": true, 00:18:08.816 "zcopy": false, 00:18:08.816 "get_zone_info": false, 00:18:08.816 "zone_management": false, 00:18:08.816 "zone_append": false, 00:18:08.816 "compare": false, 00:18:08.816 "compare_and_write": false, 00:18:08.816 "abort": false, 00:18:08.816 "seek_hole": false, 00:18:08.816 "seek_data": false, 00:18:08.816 "copy": false, 00:18:08.816 "nvme_iov_md": false 00:18:08.816 }, 00:18:08.816 "driver_specific": { 00:18:08.816 "ftl": { 00:18:08.816 "base_bdev": "27aaf5d0-3161-49e4-a85b-5c52a6e6c048", 00:18:08.816 "cache": "nvc0n1p0" 00:18:08.816 } 00:18:08.816 } 00:18:08.816 } 00:18:08.816 ] 00:18:08.816 23:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:08.816 23:19:41 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:08.816 23:19:41 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:09.075 23:19:41 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:09.075 23:19:41 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:09.075 [2024-11-25 23:19:41.340529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.075 [2024-11-25 23:19:41.340571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:09.075 [2024-11-25 23:19:41.340582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:09.075 [2024-11-25 23:19:41.340590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.075 [2024-11-25 23:19:41.340616] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:09.075 [2024-11-25 23:19:41.342701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.075 [2024-11-25 23:19:41.342726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:09.075 [2024-11-25 23:19:41.342736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.070 ms 00:18:09.075 [2024-11-25 23:19:41.342743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.075 [2024-11-25 23:19:41.343084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.075 [2024-11-25 23:19:41.343096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:09.075 [2024-11-25 23:19:41.343105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:18:09.075 [2024-11-25 23:19:41.343111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.075 [2024-11-25 23:19:41.345550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.075 [2024-11-25 23:19:41.345566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:09.075 
[2024-11-25 23:19:41.345575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.417 ms 00:18:09.075 [2024-11-25 23:19:41.345581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.075 [2024-11-25 23:19:41.350286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.075 [2024-11-25 23:19:41.350307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:09.075 [2024-11-25 23:19:41.350317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.682 ms 00:18:09.075 [2024-11-25 23:19:41.350323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.075 [2024-11-25 23:19:41.368152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.075 [2024-11-25 23:19:41.368258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:09.075 [2024-11-25 23:19:41.368284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.764 ms 00:18:09.075 [2024-11-25 23:19:41.368290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.075 [2024-11-25 23:19:41.379936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.075 [2024-11-25 23:19:41.380033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:09.075 [2024-11-25 23:19:41.380051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.611 ms 00:18:09.075 [2024-11-25 23:19:41.380068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.075 [2024-11-25 23:19:41.380205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.075 [2024-11-25 23:19:41.380214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:09.075 [2024-11-25 23:19:41.380223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:18:09.075 [2024-11-25 23:19:41.380229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.075 [2024-11-25 23:19:41.397779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.075 [2024-11-25 23:19:41.397803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:09.075 [2024-11-25 23:19:41.397812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.529 ms 00:18:09.075 [2024-11-25 23:19:41.397818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.075 [2024-11-25 23:19:41.415039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.075 [2024-11-25 23:19:41.415072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:09.075 [2024-11-25 23:19:41.415082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.186 ms 00:18:09.075 [2024-11-25 23:19:41.415087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.075 [2024-11-25 23:19:41.432237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.075 [2024-11-25 23:19:41.432262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:09.075 [2024-11-25 23:19:41.432271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.116 ms 00:18:09.075 [2024-11-25 23:19:41.432277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.335 [2024-11-25 23:19:41.449865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.335 [2024-11-25 23:19:41.449960] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:09.335 [2024-11-25 23:19:41.449976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.516 ms 00:18:09.335 [2024-11-25 23:19:41.449982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.335 [2024-11-25 23:19:41.450016] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:09.335 [2024-11-25 23:19:41.450027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 
[2024-11-25 23:19:41.450188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:09.335 [2024-11-25 23:19:41.450356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:09.335 [2024-11-25 23:19:41.450388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:09.336 [2024-11-25 23:19:41.450723] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:09.336 [2024-11-25 23:19:41.450730] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e7fee10-207a-4749-a3ea-657bace4441a 00:18:09.336 [2024-11-25 23:19:41.450736] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:09.336 [2024-11-25 23:19:41.450745] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:09.336 [2024-11-25 23:19:41.450752] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:09.336 [2024-11-25 23:19:41.450759] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:09.336 [2024-11-25 23:19:41.450764] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:09.336 [2024-11-25 23:19:41.450771] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:09.336 [2024-11-25 23:19:41.450777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:09.336 [2024-11-25 23:19:41.450784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:09.336 [2024-11-25 23:19:41.450788] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:09.336 [2024-11-25 23:19:41.450796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.336 [2024-11-25 23:19:41.450801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:09.336 [2024-11-25 23:19:41.450809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:18:09.336 [2024-11-25 23:19:41.450815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.336 [2024-11-25 23:19:41.460673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.336 [2024-11-25 23:19:41.460765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:09.336 [2024-11-25 23:19:41.460780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.829 ms 00:18:09.336 [2024-11-25 23:19:41.460791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.336 [2024-11-25 23:19:41.461086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.336 [2024-11-25 23:19:41.461094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:09.336 [2024-11-25 23:19:41.461103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:18:09.336 [2024-11-25 23:19:41.461109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.336 [2024-11-25 23:19:41.496053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.336 [2024-11-25 23:19:41.496095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:09.336 [2024-11-25 23:19:41.496105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.336 [2024-11-25 23:19:41.496111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:09.336 [2024-11-25 23:19:41.496163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.336 [2024-11-25 23:19:41.496171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:09.336 [2024-11-25 23:19:41.496178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.336 [2024-11-25 23:19:41.496184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.336 [2024-11-25 23:19:41.496268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.336 [2024-11-25 23:19:41.496278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:09.337 [2024-11-25 23:19:41.496285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.337 [2024-11-25 23:19:41.496291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.337 [2024-11-25 23:19:41.496313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.337 [2024-11-25 23:19:41.496320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:09.337 [2024-11-25 23:19:41.496327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.337 [2024-11-25 23:19:41.496332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.337 [2024-11-25 23:19:41.558314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.337 [2024-11-25 23:19:41.558352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:09.337 [2024-11-25 23:19:41.558362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.337 [2024-11-25 23:19:41.558369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.337 [2024-11-25 23:19:41.605992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.337 [2024-11-25 23:19:41.606033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:09.337 [2024-11-25 23:19:41.606043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.337 [2024-11-25 23:19:41.606049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.337 [2024-11-25 23:19:41.606129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.337 [2024-11-25 23:19:41.606138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:09.337 [2024-11-25 23:19:41.606148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.337 [2024-11-25 23:19:41.606154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.337 [2024-11-25 23:19:41.606217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.337 [2024-11-25 23:19:41.606224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:09.337 [2024-11-25 23:19:41.606232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.337 [2024-11-25 23:19:41.606238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.337 [2024-11-25 23:19:41.606323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.337 [2024-11-25 23:19:41.606331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:09.337 [2024-11-25 23:19:41.606338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.337 [2024-11-25 
23:19:41.606345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.337 [2024-11-25 23:19:41.606386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.337 [2024-11-25 23:19:41.606393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:09.337 [2024-11-25 23:19:41.606400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.337 [2024-11-25 23:19:41.606405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.337 [2024-11-25 23:19:41.606441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.337 [2024-11-25 23:19:41.606448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:09.337 [2024-11-25 23:19:41.606456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.337 [2024-11-25 23:19:41.606463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.337 [2024-11-25 23:19:41.606503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.337 [2024-11-25 23:19:41.606510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:09.337 [2024-11-25 23:19:41.606518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.337 [2024-11-25 23:19:41.606524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.337 [2024-11-25 23:19:41.606650] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 266.099 ms, result 0 00:18:09.337 true 00:18:09.337 23:19:41 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75124 00:18:09.337 23:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75124 ']' 00:18:09.337 23:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75124 00:18:09.337 23:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:09.337 23:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.337 23:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75124 00:18:09.337 killing process with pid 75124 00:18:09.337 23:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:09.337 23:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:09.337 23:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75124' 00:18:09.337 23:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75124 00:18:09.337 23:19:41 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75124 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:14.601 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:14.602 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:14.602 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:14.602 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:14.602 23:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:14.862 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:14.862 fio-3.35 00:18:14.862 Starting 1 thread 00:18:20.143 00:18:20.143 test: (groupid=0, jobs=1): err= 0: pid=75308: Mon Nov 25 23:19:51 2024 00:18:20.143 read: IOPS=1100, BW=73.1MiB/s (76.7MB/s)(255MiB/3482msec) 00:18:20.143 slat (nsec): min=3838, max=37748, avg=5318.38, stdev=2175.22 00:18:20.143 clat (usec): min=247, max=1262, avg=411.91, stdev=125.47 00:18:20.143 lat (usec): min=252, max=1274, avg=417.23, stdev=125.84 00:18:20.143 clat percentiles (usec): 00:18:20.143 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 302], 20.00th=[ 306], 00:18:20.143 | 30.00th=[ 310], 40.00th=[ 322], 50.00th=[ 371], 60.00th=[ 437], 00:18:20.143 | 70.00th=[ 482], 80.00th=[ 523], 90.00th=[ 545], 95.00th=[ 578], 00:18:20.143 | 99.00th=[ 873], 99.50th=[ 963], 99.90th=[ 1156], 99.95th=[ 1221], 00:18:20.143 | 99.99th=[ 1270] 00:18:20.143 write: IOPS=1108, BW=73.6MiB/s (77.2MB/s)(256MiB/3479msec); 0 zone resets 00:18:20.143 slat (nsec): min=14421, max=51602, avg=22279.98, stdev=4235.28 00:18:20.143 clat (usec): min=276, max=2226, avg=452.65, stdev=153.39 00:18:20.143 lat (usec): min=303, max=2255, avg=474.93, stdev=152.34 00:18:20.143 clat percentiles (usec): 00:18:20.143 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 318], 20.00th=[ 322], 00:18:20.143 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 392], 60.00th=[ 502], 00:18:20.143 | 70.00th=[ 553], 80.00th=[ 570], 90.00th=[ 619], 95.00th=[ 652], 00:18:20.143 | 99.00th=[ 963], 99.50th=[ 1090], 99.90th=[ 1434], 99.95th=[ 1532], 00:18:20.143 | 99.99th=[ 2212] 00:18:20.143 bw ( KiB/s): min=59568, max=98872, per=94.82%, avg=71468.00, stdev=15285.52, samples=6 00:18:20.143 iops : min= 876, max= 1454, avg=1051.00, stdev=224.79, samples=6 00:18:20.143 lat (usec) : 250=0.01%, 500=66.59%, 750=30.88%, 
1000=1.92% 00:18:20.143 lat (msec) : 2=0.59%, 4=0.01% 00:18:20.143 cpu : usr=99.28%, sys=0.03%, ctx=5, majf=0, minf=1169 00:18:20.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:20.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.143 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:20.143 00:18:20.143 Run status group 0 (all jobs): 00:18:20.143 READ: bw=73.1MiB/s (76.7MB/s), 73.1MiB/s-73.1MiB/s (76.7MB/s-76.7MB/s), io=255MiB (267MB), run=3482-3482msec 00:18:20.143 WRITE: bw=73.6MiB/s (77.2MB/s), 73.6MiB/s-73.6MiB/s (77.2MB/s-77.2MB/s), io=256MiB (269MB), run=3479-3479msec 00:18:20.713 ----------------------------------------------------- 00:18:20.713 Suppressions used: 00:18:20.713 count bytes template 00:18:20.713 1 5 /usr/src/fio/parse.c 00:18:20.713 1 8 libtcmalloc_minimal.so 00:18:20.713 1 904 libcrypto.so 00:18:20.713 ----------------------------------------------------- 00:18:20.713 00:18:20.713 23:19:52 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:20.713 23:19:52 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:20.713 23:19:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:20.713 23:19:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:20.713 23:19:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:20.714 23:19:52 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:20.714 23:19:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:20.714 23:19:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:20.714 23:19:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:20.714 23:19:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:20.714 23:19:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:20.714 23:19:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:20.714 23:19:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:20.714 23:19:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:20.714 23:19:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:20.714 23:19:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:20.714 23:19:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:20.714 23:19:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:20.714 23:19:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:20.714 23:19:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:20.714 23:19:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:20.714 23:19:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:20.714 23:19:53 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:20.714 23:19:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:20.973 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:20.973 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:20.973 fio-3.35 00:18:20.973 Starting 2 threads 00:18:47.521 00:18:47.521 first_half: (groupid=0, jobs=1): err= 0: pid=75400: Mon Nov 25 23:20:18 2024 00:18:47.521 read: IOPS=2716, BW=10.6MiB/s (11.1MB/s)(256MiB/24103msec) 00:18:47.521 slat (nsec): min=2946, max=42968, avg=4929.14, stdev=1027.49 00:18:47.521 clat (usec): min=1475, max=427196, avg=38993.61, stdev=31592.04 00:18:47.521 lat (usec): min=1481, max=427200, avg=38998.54, stdev=31592.07 00:18:47.521 clat percentiles (msec): 00:18:47.521 | 1.00th=[ 10], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:18:47.521 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:18:47.521 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 45], 95.00th=[ 85], 00:18:47.521 | 99.00th=[ 194], 99.50th=[ 239], 99.90th=[ 309], 99.95th=[ 372], 00:18:47.521 | 99.99th=[ 418] 00:18:47.521 write: IOPS=2723, BW=10.6MiB/s (11.2MB/s)(256MiB/24059msec); 0 zone resets 00:18:47.521 slat (usec): min=3, max=2085, avg= 6.49, stdev= 8.76 00:18:47.521 clat (usec): min=374, max=64851, avg=8093.17, stdev=8612.26 00:18:47.521 lat (usec): min=380, max=64856, avg=8099.66, stdev=8612.45 00:18:47.521 clat percentiles (usec): 00:18:47.521 | 1.00th=[ 881], 5.00th=[ 1385], 10.00th=[ 1893], 20.00th=[ 2966], 00:18:47.521 | 30.00th=[ 3884], 40.00th=[ 4883], 50.00th=[ 5669], 60.00th=[ 6390], 00:18:47.521 | 70.00th=[ 7439], 80.00th=[10683], 90.00th=[16319], 95.00th=[25822], 00:18:47.521 | 99.00th=[43254], 99.50th=[51119], 99.90th=[61080], 99.95th=[62129], 00:18:47.521 | 99.99th=[63701] 00:18:47.521 bw ( KiB/s): min= 424, max=47688, per=99.81%, avg=21750.46, stdev=15112.03, samples=24 00:18:47.521 iops : min= 106, max=11922, avg=5437.58, stdev=3778.00, samples=24 00:18:47.521 lat (usec) : 500=0.02%, 750=0.23%, 1000=0.61% 00:18:47.521 lat (msec) : 2=4.44%, 4=10.49%, 10=24.31%, 20=7.91%, 50=47.96% 00:18:47.521 lat (msec) : 100=1.86%, 250=1.96%, 500=0.22% 00:18:47.521 cpu : usr=99.24%, sys=0.11%, ctx=38, majf=0, minf=5550 00:18:47.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:47.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.521 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.521 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.521 second_half: (groupid=0, jobs=1): err= 0: pid=75401: Mon Nov 25 23:20:18 2024 00:18:47.521 read: IOPS=2744, BW=10.7MiB/s (11.2MB/s)(256MiB/23859msec) 00:18:47.521 slat (nsec): min=3052, max=94584, avg=4783.51, stdev=1046.41 00:18:47.521 clat (msec): min=10, max=284, avg=39.01, stdev=26.42 00:18:47.521 lat (msec): min=10, max=284, avg=39.01, stdev=26.42 00:18:47.521 clat percentiles (msec): 00:18:47.521 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:18:47.521 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:18:47.521 | 70.00th=[ 35], 80.00th=[ 38], 90.00th=[ 46], 95.00th=[ 83], 
00:18:47.521 | 99.00th=[ 180], 99.50th=[ 203], 99.90th=[ 245], 99.95th=[ 253], 00:18:47.521 | 99.99th=[ 271] 00:18:47.521 write: IOPS=2930, BW=11.4MiB/s (12.0MB/s)(256MiB/22367msec); 0 zone resets 00:18:47.521 slat (usec): min=3, max=3007, avg= 6.45, stdev=15.31 00:18:47.521 clat (usec): min=384, max=48926, avg=7603.72, stdev=5252.06 00:18:47.521 lat (usec): min=391, max=48940, avg=7610.17, stdev=5252.91 00:18:47.521 clat percentiles (usec): 00:18:47.521 | 1.00th=[ 955], 5.00th=[ 2245], 10.00th=[ 3261], 20.00th=[ 4047], 00:18:47.521 | 30.00th=[ 5014], 40.00th=[ 5538], 50.00th=[ 6128], 60.00th=[ 6980], 00:18:47.521 | 70.00th=[ 7767], 80.00th=[10159], 90.00th=[14484], 95.00th=[18220], 00:18:47.521 | 99.00th=[26608], 99.50th=[33817], 99.90th=[43779], 99.95th=[45351], 00:18:47.521 | 99.99th=[46400] 00:18:47.521 bw ( KiB/s): min= 944, max=47624, per=100.00%, avg=22777.52, stdev=15112.63, samples=23 00:18:47.521 iops : min= 236, max=11906, avg=5694.35, stdev=3778.14, samples=23 00:18:47.522 lat (usec) : 500=0.02%, 750=0.15%, 1000=0.38% 00:18:47.522 lat (msec) : 2=1.42%, 4=7.73%, 10=30.21%, 20=8.56%, 50=47.44% 00:18:47.522 lat (msec) : 100=1.97%, 250=2.09%, 500=0.03% 00:18:47.522 cpu : usr=99.31%, sys=0.20%, ctx=68, majf=0, minf=5567 00:18:47.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:47.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.522 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.522 issued rwts: total=65490,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.522 00:18:47.522 Run status group 0 (all jobs): 00:18:47.522 READ: bw=21.2MiB/s (22.3MB/s), 10.6MiB/s-10.7MiB/s (11.1MB/s-11.2MB/s), io=512MiB (536MB), run=23859-24103msec 00:18:47.522 WRITE: bw=21.3MiB/s (22.3MB/s), 10.6MiB/s-11.4MiB/s (11.2MB/s-12.0MB/s), io=512MiB (537MB), run=22367-24059msec 00:18:47.782 ----------------------------------------------------- 00:18:47.782 Suppressions used: 00:18:47.782 count bytes template 00:18:47.782 2 10 /usr/src/fio/parse.c 00:18:47.782 3 288 /usr/src/fio/iolog.c 00:18:47.782 1 8 libtcmalloc_minimal.so 00:18:47.782 1 904 libcrypto.so 00:18:47.782 ----------------------------------------------------- 00:18:47.782 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:47.782 
23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:47.782 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:47.783 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:47.783 23:20:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:48.043 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:48.043 fio-3.35 00:18:48.043 Starting 1 thread 00:19:10.015 00:19:10.015 test: (groupid=0, jobs=1): err= 0: pid=75719: Mon Nov 25 23:20:38 2024 00:19:10.015 read: IOPS=6371, BW=24.9MiB/s (26.1MB/s)(255MiB/10234msec) 00:19:10.015 slat (nsec): min=3080, max=41930, avg=6565.04, stdev=2439.66 00:19:10.015 clat (usec): min=919, max=34818, avg=20079.12, stdev=3745.22 00:19:10.015 lat (usec): min=928, max=34825, avg=20085.68, stdev=3746.02 00:19:10.015 clat percentiles (usec): 00:19:10.015 | 1.00th=[14091], 5.00th=[14746], 10.00th=[15664], 20.00th=[16188], 00:19:10.015 | 30.00th=[17171], 40.00th=[18744], 50.00th=[20055], 60.00th=[21103], 00:19:10.015 | 70.00th=[22414], 80.00th=[23462], 90.00th=[24773], 95.00th=[26346], 00:19:10.015 | 99.00th=[29754], 99.50th=[31065], 99.90th=[32637], 99.95th=[32900], 00:19:10.015 | 99.99th=[33817] 00:19:10.015 write: IOPS=9097, BW=35.5MiB/s (37.3MB/s)(256MiB/7204msec); 0 zone resets 00:19:10.015 slat (usec): min=4, max=1713, avg= 7.92, stdev= 8.49 00:19:10.015 clat (usec): min=572, max=73617, avg=14002.78, stdev=15714.22 00:19:10.015 lat (usec): min=578, max=73624, avg=14010.70, stdev=15714.13 00:19:10.015 clat percentiles (usec): 00:19:10.015 | 1.00th=[ 1139], 5.00th=[ 1434], 10.00th=[ 1614], 20.00th=[ 1926], 00:19:10.015 | 30.00th=[ 2278], 40.00th=[ 3195], 50.00th=[10159], 60.00th=[12256], 00:19:10.015 | 70.00th=[14746], 80.00th=[17695], 90.00th=[45351], 95.00th=[49021], 00:19:10.015 | 99.00th=[56886], 99.50th=[60031], 99.90th=[63177], 99.95th=[64750], 00:19:10.015 | 99.99th=[69731] 00:19:10.015 bw ( KiB/s): min=14112, max=44584, per=96.05%, avg=34952.53, stdev=6831.35, samples=15 00:19:10.015 iops : min= 3528, max=11146, avg=8738.13, stdev=1707.84, samples=15 00:19:10.015 lat (usec) : 750=0.01%, 1000=0.17% 00:19:10.015 lat (msec) : 2=11.12%, 4=9.38%, 10=4.07%, 20=41.84%, 50=31.51% 00:19:10.015 lat (msec) : 100=1.91% 00:19:10.015 cpu : usr=98.81%, sys=0.26%, ctx=25, majf=0, 
minf=5565 00:19:10.015 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:10.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.015 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.015 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.015 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.015 00:19:10.015 Run status group 0 (all jobs): 00:19:10.015 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=255MiB (267MB), run=10234-10234msec 00:19:10.015 WRITE: bw=35.5MiB/s (37.3MB/s), 35.5MiB/s-35.5MiB/s (37.3MB/s-37.3MB/s), io=256MiB (268MB), run=7204-7204msec 00:19:10.015 ----------------------------------------------------- 00:19:10.015 Suppressions used: 00:19:10.015 count bytes template 00:19:10.015 1 5 /usr/src/fio/parse.c 00:19:10.015 2 192 /usr/src/fio/iolog.c 00:19:10.015 1 8 libtcmalloc_minimal.so 00:19:10.015 1 904 libcrypto.so 00:19:10.015 ----------------------------------------------------- 00:19:10.015 00:19:10.015 23:20:40 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:10.015 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:10.015 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:10.015 23:20:40 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:10.015 Remove shared memory files 00:19:10.015 23:20:40 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:10.015 23:20:40 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:10.015 23:20:40 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:10.015 23:20:40 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:10.015 23:20:40 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57109 /dev/shm/spdk_tgt_trace.pid74040 00:19:10.015 23:20:40 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:10.015 23:20:40 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:10.015 ************************************ 00:19:10.015 END TEST ftl_fio_basic 00:19:10.015 ************************************ 00:19:10.015 00:19:10.015 real 1m7.138s 00:19:10.015 user 2m18.987s 00:19:10.015 sys 0m11.251s 00:19:10.015 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.015 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:10.015 23:20:40 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:10.015 23:20:40 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:10.015 23:20:40 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.015 23:20:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:10.015 ************************************ 00:19:10.015 START TEST ftl_bdevperf 00:19:10.015 ************************************ 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:10.015 * Looking for test storage... 
00:19:10.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.015 23:20:40 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:10.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.015 --rc genhtml_branch_coverage=1 00:19:10.015 --rc genhtml_function_coverage=1 00:19:10.015 --rc genhtml_legend=1 00:19:10.015 --rc geninfo_all_blocks=1 00:19:10.015 --rc geninfo_unexecuted_blocks=1 00:19:10.015 00:19:10.015 ' 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:10.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.015 --rc genhtml_branch_coverage=1 00:19:10.015 
--rc genhtml_function_coverage=1 00:19:10.015 --rc genhtml_legend=1 00:19:10.015 --rc geninfo_all_blocks=1 00:19:10.015 --rc geninfo_unexecuted_blocks=1 00:19:10.015 00:19:10.015 ' 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:10.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.015 --rc genhtml_branch_coverage=1 00:19:10.015 --rc genhtml_function_coverage=1 00:19:10.015 --rc genhtml_legend=1 00:19:10.015 --rc geninfo_all_blocks=1 00:19:10.015 --rc geninfo_unexecuted_blocks=1 00:19:10.015 00:19:10.015 ' 00:19:10.015 23:20:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:10.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.015 --rc genhtml_branch_coverage=1 00:19:10.015 --rc genhtml_function_coverage=1 00:19:10.015 --rc genhtml_legend=1 00:19:10.015 --rc geninfo_all_blocks=1 00:19:10.015 --rc geninfo_unexecuted_blocks=1 00:19:10.015 00:19:10.016 ' 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76003 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76003 00:19:10.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76003 ']' 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:10.016 [2024-11-25 23:20:41.097502] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:19:10.016 [2024-11-25 23:20:41.097737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76003 ] 00:19:10.016 [2024-11-25 23:20:41.253564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.016 [2024-11-25 23:20:41.369316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:10.016 23:20:41 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:10.016 23:20:42 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:10.016 23:20:42 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:10.016 23:20:42 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:10.016 23:20:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:10.016 23:20:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:10.016 23:20:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:10.016 23:20:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:10.016 23:20:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:10.279 23:20:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:10.279 { 00:19:10.279 "name": "nvme0n1", 00:19:10.279 "aliases": [ 00:19:10.279 "34ce5e93-38cd-4774-8ac8-a5b7d1dc7ab2" 00:19:10.279 ], 00:19:10.279 "product_name": "NVMe disk", 00:19:10.279 "block_size": 4096, 00:19:10.279 "num_blocks": 1310720, 00:19:10.279 "uuid": "34ce5e93-38cd-4774-8ac8-a5b7d1dc7ab2", 00:19:10.279 "numa_id": -1, 00:19:10.279 "assigned_rate_limits": { 00:19:10.279 "rw_ios_per_sec": 0, 00:19:10.279 "rw_mbytes_per_sec": 0, 00:19:10.279 "r_mbytes_per_sec": 0, 00:19:10.279 "w_mbytes_per_sec": 0 00:19:10.279 }, 00:19:10.279 "claimed": true, 00:19:10.279 "claim_type": "read_many_write_one", 00:19:10.279 "zoned": false, 00:19:10.279 "supported_io_types": { 00:19:10.279 "read": true, 00:19:10.279 "write": true, 00:19:10.279 "unmap": true, 00:19:10.279 "flush": true, 00:19:10.279 "reset": true, 00:19:10.279 "nvme_admin": true, 00:19:10.279 "nvme_io": true, 00:19:10.279 "nvme_io_md": false, 00:19:10.279 "write_zeroes": true, 00:19:10.279 "zcopy": false, 00:19:10.279 "get_zone_info": false, 00:19:10.279 "zone_management": false, 00:19:10.279 "zone_append": false, 00:19:10.279 "compare": true, 00:19:10.279 "compare_and_write": false, 00:19:10.279 "abort": true, 00:19:10.279 "seek_hole": false, 00:19:10.279 "seek_data": false, 00:19:10.279 "copy": true, 00:19:10.279 "nvme_iov_md": false 00:19:10.279 }, 00:19:10.279 "driver_specific": { 00:19:10.279 
"nvme": [ 00:19:10.279 { 00:19:10.279 "pci_address": "0000:00:11.0", 00:19:10.279 "trid": { 00:19:10.279 "trtype": "PCIe", 00:19:10.279 "traddr": "0000:00:11.0" 00:19:10.279 }, 00:19:10.279 "ctrlr_data": { 00:19:10.279 "cntlid": 0, 00:19:10.279 "vendor_id": "0x1b36", 00:19:10.279 "model_number": "QEMU NVMe Ctrl", 00:19:10.279 "serial_number": "12341", 00:19:10.279 "firmware_revision": "8.0.0", 00:19:10.279 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:10.279 "oacs": { 00:19:10.279 "security": 0, 00:19:10.279 "format": 1, 00:19:10.279 "firmware": 0, 00:19:10.279 "ns_manage": 1 00:19:10.279 }, 00:19:10.279 "multi_ctrlr": false, 00:19:10.279 "ana_reporting": false 00:19:10.279 }, 00:19:10.279 "vs": { 00:19:10.279 "nvme_version": "1.4" 00:19:10.279 }, 00:19:10.279 "ns_data": { 00:19:10.279 "id": 1, 00:19:10.279 "can_share": false 00:19:10.279 } 00:19:10.279 } 00:19:10.279 ], 00:19:10.279 "mp_policy": "active_passive" 00:19:10.279 } 00:19:10.279 } 00:19:10.279 ]' 00:19:10.279 23:20:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:10.279 23:20:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:10.279 23:20:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:10.279 23:20:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:10.279 23:20:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:10.279 23:20:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:10.279 23:20:42 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:10.279 23:20:42 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:10.279 23:20:42 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:10.279 23:20:42 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:10.279 23:20:42 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:10.541 23:20:42 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=c29056ee-bd9f-41ea-be8d-46d874fcd7ec 00:19:10.541 23:20:42 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:10.541 23:20:42 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c29056ee-bd9f-41ea-be8d-46d874fcd7ec 00:19:10.802 23:20:43 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:11.062 23:20:43 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=893dee21-3592-42cb-85cf-deb7e828d057 00:19:11.062 23:20:43 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 893dee21-3592-42cb-85cf-deb7e828d057 00:19:11.062 23:20:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19 00:19:11.062 23:20:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19 00:19:11.062 23:20:43 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:11.062 23:20:43 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:11.062 23:20:43 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19 00:19:11.062 23:20:43 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:11.062 23:20:43 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19 00:19:11.062 23:20:43 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19 00:19:11.062 23:20:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:11.062 23:20:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:11.062 23:20:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:11.062 23:20:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19 00:19:11.320 23:20:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:11.320 { 00:19:11.320 "name": "4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19", 00:19:11.320 "aliases": [ 00:19:11.320 "lvs/nvme0n1p0" 00:19:11.320 ], 00:19:11.320 "product_name": "Logical Volume", 00:19:11.320 "block_size": 4096, 00:19:11.320 "num_blocks": 26476544, 00:19:11.320 "uuid": "4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19", 00:19:11.320 "assigned_rate_limits": { 00:19:11.320 "rw_ios_per_sec": 0, 00:19:11.320 "rw_mbytes_per_sec": 0, 00:19:11.320 "r_mbytes_per_sec": 0, 00:19:11.320 "w_mbytes_per_sec": 0 00:19:11.320 }, 00:19:11.320 "claimed": false, 00:19:11.320 "zoned": false, 00:19:11.320 "supported_io_types": { 00:19:11.320 "read": true, 00:19:11.320 "write": true, 00:19:11.320 "unmap": true, 00:19:11.321 "flush": false, 00:19:11.321 "reset": true, 00:19:11.321 "nvme_admin": false, 00:19:11.321 "nvme_io": false, 00:19:11.321 "nvme_io_md": false, 00:19:11.321 "write_zeroes": true, 00:19:11.321 "zcopy": false, 00:19:11.321 "get_zone_info": false, 00:19:11.321 "zone_management": false, 00:19:11.321 "zone_append": false, 00:19:11.321 "compare": false, 00:19:11.321 "compare_and_write": false, 00:19:11.321 "abort": false, 00:19:11.321 "seek_hole": true, 00:19:11.321 "seek_data": true, 00:19:11.321 "copy": false, 00:19:11.321 "nvme_iov_md": false 00:19:11.321 }, 00:19:11.321 "driver_specific": { 00:19:11.321 "lvol": { 00:19:11.321 "lvol_store_uuid": "893dee21-3592-42cb-85cf-deb7e828d057", 00:19:11.321 "base_bdev": "nvme0n1", 00:19:11.321 "thin_provision": true, 00:19:11.321 "num_allocated_clusters": 0, 00:19:11.321 "snapshot": false, 00:19:11.321 "clone": false, 00:19:11.321 "esnap_clone": false 00:19:11.321 } 00:19:11.321 } 00:19:11.321 } 00:19:11.321 ]' 00:19:11.321 23:20:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:11.321 23:20:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:11.321 23:20:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:11.321 23:20:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:11.321 23:20:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:11.321 23:20:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:11.321 23:20:43 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:11.321 23:20:43 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:11.321 23:20:43 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:11.579 23:20:43 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:11.579 23:20:43 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:11.579 23:20:43 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19 00:19:11.579 23:20:43 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19 00:19:11.579 23:20:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:11.579 23:20:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:11.579 23:20:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:11.579 23:20:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19 00:19:11.838 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:11.838 { 00:19:11.838 "name": "4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19", 00:19:11.838 "aliases": [ 00:19:11.838 "lvs/nvme0n1p0" 00:19:11.838 ], 00:19:11.838 "product_name": "Logical Volume", 00:19:11.838 "block_size": 4096, 00:19:11.838 "num_blocks": 26476544, 00:19:11.838 "uuid": "4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19", 00:19:11.838 "assigned_rate_limits": { 00:19:11.838 "rw_ios_per_sec": 0, 00:19:11.838 "rw_mbytes_per_sec": 0, 00:19:11.838 "r_mbytes_per_sec": 0, 00:19:11.838 "w_mbytes_per_sec": 0 00:19:11.838 }, 00:19:11.838 "claimed": false, 00:19:11.838 "zoned": false, 00:19:11.838 "supported_io_types": { 00:19:11.838 "read": true, 00:19:11.838 "write": true, 00:19:11.838 "unmap": true, 00:19:11.838 "flush": false, 00:19:11.838 "reset": true, 00:19:11.838 "nvme_admin": false, 00:19:11.838 "nvme_io": false, 00:19:11.838 "nvme_io_md": false, 00:19:11.838 "write_zeroes": true, 00:19:11.838 "zcopy": false, 00:19:11.838 "get_zone_info": false, 00:19:11.838 "zone_management": false, 00:19:11.838 "zone_append": false, 00:19:11.838 "compare": false, 00:19:11.838 "compare_and_write": false, 00:19:11.838 "abort": false, 00:19:11.838 "seek_hole": true, 00:19:11.838 "seek_data": true, 00:19:11.838 "copy": false, 00:19:11.838 "nvme_iov_md": false 00:19:11.838 }, 00:19:11.838 "driver_specific": { 00:19:11.838 "lvol": { 00:19:11.838 "lvol_store_uuid": "893dee21-3592-42cb-85cf-deb7e828d057", 00:19:11.838 "base_bdev": "nvme0n1", 00:19:11.838 "thin_provision": true, 00:19:11.838 "num_allocated_clusters": 0, 00:19:11.838 "snapshot": false, 00:19:11.838 "clone": false, 00:19:11.838 "esnap_clone": false 00:19:11.838 } 00:19:11.838 } 00:19:11.838 } 00:19:11.838 ]' 00:19:11.838 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:11.838 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:11.838 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:11.838 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:11.838 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:11.838 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:11.838 23:20:44 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:11.838 23:20:44 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:12.097 23:20:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:12.097 23:20:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19 00:19:12.097 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19 00:19:12.097 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:12.097 23:20:44 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:12.097 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:12.097 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19 00:19:12.355 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:12.355 { 00:19:12.355 "name": "4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19", 00:19:12.355 "aliases": [ 00:19:12.355 "lvs/nvme0n1p0" 00:19:12.355 ], 00:19:12.355 "product_name": "Logical Volume", 00:19:12.355 "block_size": 4096, 00:19:12.355 "num_blocks": 26476544, 00:19:12.355 "uuid": "4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19", 00:19:12.355 "assigned_rate_limits": { 00:19:12.355 "rw_ios_per_sec": 0, 00:19:12.355 "rw_mbytes_per_sec": 0, 00:19:12.355 "r_mbytes_per_sec": 0, 00:19:12.355 "w_mbytes_per_sec": 0 00:19:12.355 }, 00:19:12.355 "claimed": false, 00:19:12.355 "zoned": false, 00:19:12.355 "supported_io_types": { 00:19:12.355 "read": true, 00:19:12.355 "write": true, 00:19:12.355 "unmap": true, 00:19:12.355 "flush": false, 00:19:12.355 "reset": true, 00:19:12.355 "nvme_admin": false, 00:19:12.355 "nvme_io": false, 00:19:12.355 "nvme_io_md": false, 00:19:12.355 "write_zeroes": true, 00:19:12.355 "zcopy": false, 00:19:12.355 "get_zone_info": false, 00:19:12.355 "zone_management": false, 00:19:12.355 "zone_append": false, 00:19:12.355 "compare": false, 00:19:12.355 "compare_and_write": false, 00:19:12.355 "abort": false, 00:19:12.355 "seek_hole": true, 00:19:12.355 "seek_data": true, 00:19:12.355 "copy": false, 00:19:12.355 "nvme_iov_md": false 00:19:12.355 }, 00:19:12.355 "driver_specific": { 00:19:12.355 "lvol": { 00:19:12.355 "lvol_store_uuid": "893dee21-3592-42cb-85cf-deb7e828d057", 00:19:12.355 "base_bdev": "nvme0n1", 00:19:12.355 "thin_provision": true, 00:19:12.355 "num_allocated_clusters": 0, 00:19:12.355 "snapshot": false, 00:19:12.355 "clone": false, 00:19:12.355 "esnap_clone": false 00:19:12.355 } 00:19:12.355 } 00:19:12.355 } 00:19:12.355 ]' 00:19:12.355 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:12.355 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:12.355 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:12.355 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:12.355 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:12.355 23:20:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:12.355 23:20:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:12.355 23:20:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4e8dbd01-6ac0-431b-a1f8-7a5cef6d8b19 -c nvc0n1p0 --l2p_dram_limit 20 00:19:12.615 [2024-11-25 23:20:44.745130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.615 [2024-11-25 23:20:44.745175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:12.615 [2024-11-25 23:20:44.745186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:12.615 [2024-11-25 23:20:44.745196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.615 [2024-11-25 23:20:44.745235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.615 [2024-11-25 23:20:44.745246] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.615 [2024-11-25 23:20:44.745252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:12.615 [2024-11-25 23:20:44.745260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.615 [2024-11-25 23:20:44.745273] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:12.615 [2024-11-25 23:20:44.747445] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:12.615 [2024-11-25 23:20:44.747474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.615 [2024-11-25 23:20:44.747482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.615 [2024-11-25 23:20:44.747490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.204 ms 00:19:12.615 [2024-11-25 23:20:44.747497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.615 [2024-11-25 23:20:44.747551] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d58aaf80-e613-45b9-8b16-3b87dc7922e0 00:19:12.615 [2024-11-25 23:20:44.748475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.615 [2024-11-25 23:20:44.748583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:12.615 [2024-11-25 23:20:44.748603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:12.615 [2024-11-25 23:20:44.748609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.615 [2024-11-25 23:20:44.753242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.615 [2024-11-25 23:20:44.753265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.615 [2024-11-25 23:20:44.753274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.602 ms 00:19:12.615 [2024-11-25 23:20:44.753280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.615 [2024-11-25 23:20:44.753348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.615 [2024-11-25 23:20:44.753355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.615 [2024-11-25 23:20:44.753365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:12.615 [2024-11-25 23:20:44.753371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.615 [2024-11-25 23:20:44.753404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.615 [2024-11-25 23:20:44.753411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:12.615 [2024-11-25 23:20:44.753419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:12.615 [2024-11-25 23:20:44.753424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.615 [2024-11-25 23:20:44.753440] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:12.615 [2024-11-25 23:20:44.756253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.615 [2024-11-25 23:20:44.756282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:12.615 [2024-11-25 23:20:44.756289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.819 ms 00:19:12.615 [2024-11-25 23:20:44.756300] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.615 [2024-11-25 23:20:44.756322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.615 [2024-11-25 23:20:44.756330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:12.615 [2024-11-25 23:20:44.756336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:12.615 [2024-11-25 23:20:44.756343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.615 [2024-11-25 23:20:44.756354] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:12.615 [2024-11-25 23:20:44.756464] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:12.615 [2024-11-25 23:20:44.756473] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:12.615 [2024-11-25 23:20:44.756482] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:12.615 [2024-11-25 23:20:44.756490] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:12.616 [2024-11-25 23:20:44.756499] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:12.616 [2024-11-25 23:20:44.756505] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:12.616 [2024-11-25 23:20:44.756512] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:12.616 [2024-11-25 23:20:44.756517] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:12.616 [2024-11-25 23:20:44.756524] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:12.616 [2024-11-25 23:20:44.756529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.616 [2024-11-25 23:20:44.756538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:12.616 [2024-11-25 23:20:44.756545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:19:12.616 [2024-11-25 23:20:44.756553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.616 [2024-11-25 23:20:44.756615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.616 [2024-11-25 23:20:44.756624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:12.616 [2024-11-25 23:20:44.756630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:12.616 [2024-11-25 23:20:44.756638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.616 [2024-11-25 23:20:44.756705] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:12.616 [2024-11-25 23:20:44.756714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:12.616 [2024-11-25 23:20:44.756721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:12.616 [2024-11-25 23:20:44.756729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.616 [2024-11-25 23:20:44.756735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:12.616 [2024-11-25 23:20:44.756741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:12.616 [2024-11-25 23:20:44.756746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:12.616 
[2024-11-25 23:20:44.756752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:12.616 [2024-11-25 23:20:44.756758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:12.616 [2024-11-25 23:20:44.756764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:12.616 [2024-11-25 23:20:44.756769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:12.616 [2024-11-25 23:20:44.756781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:12.616 [2024-11-25 23:20:44.756786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:12.616 [2024-11-25 23:20:44.756792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:12.616 [2024-11-25 23:20:44.756797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:12.616 [2024-11-25 23:20:44.756805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.616 [2024-11-25 23:20:44.756810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:12.616 [2024-11-25 23:20:44.756825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:12.616 [2024-11-25 23:20:44.756830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.616 [2024-11-25 23:20:44.756839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:12.616 [2024-11-25 23:20:44.756844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:12.616 [2024-11-25 23:20:44.756851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.616 [2024-11-25 23:20:44.756857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:12.616 [2024-11-25 23:20:44.756863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:12.616 [2024-11-25 23:20:44.756868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.616 [2024-11-25 23:20:44.756875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:12.616 [2024-11-25 23:20:44.756880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:12.616 [2024-11-25 23:20:44.756886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.616 [2024-11-25 23:20:44.756892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:12.616 [2024-11-25 23:20:44.756898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:12.616 [2024-11-25 23:20:44.756903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.616 [2024-11-25 23:20:44.756911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:12.616 [2024-11-25 23:20:44.756916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:12.616 [2024-11-25 23:20:44.756923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:12.616 [2024-11-25 23:20:44.756928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:12.616 [2024-11-25 23:20:44.756934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:12.616 [2024-11-25 23:20:44.756939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:12.616 [2024-11-25 23:20:44.756945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:12.616 [2024-11-25 23:20:44.756950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:12.616 [2024-11-25 23:20:44.756956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.616 [2024-11-25 23:20:44.756961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:12.616 [2024-11-25 23:20:44.756968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:12.616 [2024-11-25 23:20:44.756972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.616 [2024-11-25 23:20:44.756978] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:12.616 [2024-11-25 23:20:44.756984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:12.616 [2024-11-25 23:20:44.756991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:12.616 [2024-11-25 23:20:44.756996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.616 [2024-11-25 23:20:44.757005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:12.616 [2024-11-25 23:20:44.757010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:12.616 [2024-11-25 23:20:44.757017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:12.616 [2024-11-25 23:20:44.757022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:12.616 [2024-11-25 23:20:44.757029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:12.616 [2024-11-25 23:20:44.757034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:12.616 [2024-11-25 23:20:44.757044] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:12.616 [2024-11-25 23:20:44.757051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:12.616 [2024-11-25 23:20:44.757069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:12.616 [2024-11-25 23:20:44.757075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:12.616 [2024-11-25 23:20:44.757082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:12.616 [2024-11-25 23:20:44.757088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:12.616 [2024-11-25 23:20:44.757095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:12.616 [2024-11-25 23:20:44.757100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:12.616 [2024-11-25 23:20:44.757107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:12.616 [2024-11-25 23:20:44.757112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:12.616 [2024-11-25 23:20:44.757120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:12.616 [2024-11-25 23:20:44.757126] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:12.616 [2024-11-25 23:20:44.757132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:12.616 [2024-11-25 23:20:44.757138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:12.616 [2024-11-25 23:20:44.757144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:12.616 [2024-11-25 23:20:44.757150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:12.616 [2024-11-25 23:20:44.757157] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:12.616 [2024-11-25 23:20:44.757163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:12.616 [2024-11-25 23:20:44.757173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:12.616 [2024-11-25 23:20:44.757179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:12.616 [2024-11-25 23:20:44.757186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:12.616 [2024-11-25 23:20:44.757192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:12.616 [2024-11-25 23:20:44.757199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.616 [2024-11-25 23:20:44.757204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:12.616 [2024-11-25 23:20:44.757211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:19:12.616 [2024-11-25 23:20:44.757217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.616 [2024-11-25 23:20:44.757244] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:19:12.616 [2024-11-25 23:20:44.757251] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:15.956 [2024-11-25 23:20:47.914490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:47.914574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:15.956 [2024-11-25 23:20:47.914594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3157.217 ms 00:19:15.956 [2024-11-25 23:20:47.914603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 23:20:47.945973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:47.946029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:15.956 [2024-11-25 23:20:47.946048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.129 ms 00:19:15.956 [2024-11-25 23:20:47.946078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 23:20:47.946218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:47.946231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:15.956 [2024-11-25 23:20:47.946247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:15.956 [2024-11-25 23:20:47.946256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 23:20:47.994480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:47.994519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:15.956 [2024-11-25 23:20:47.994534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.171 ms 00:19:15.956 [2024-11-25 23:20:47.994542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 23:20:47.994578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:47.994587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:15.956 [2024-11-25 23:20:47.994597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:15.956 [2024-11-25 23:20:47.994606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 23:20:47.994969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:47.994987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:15.956 [2024-11-25 23:20:47.994998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:19:15.956 [2024-11-25 23:20:47.995012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 23:20:47.995142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:47.995152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:15.956 [2024-11-25 23:20:47.995163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:19:15.956 [2024-11-25 23:20:47.995184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 23:20:48.008381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:48.008414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:15.956 [2024-11-25 
23:20:48.008426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.179 ms 00:19:15.956 [2024-11-25 23:20:48.008442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 23:20:48.019957] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:15.956 [2024-11-25 23:20:48.025560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:48.025595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:15.956 [2024-11-25 23:20:48.025606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.055 ms 00:19:15.956 [2024-11-25 23:20:48.025617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 23:20:48.109750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:48.109801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:15.956 [2024-11-25 23:20:48.109814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.110 ms 00:19:15.956 [2024-11-25 23:20:48.109824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 23:20:48.110008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:48.110024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:15.956 [2024-11-25 23:20:48.110033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:19:15.956 [2024-11-25 23:20:48.110046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 23:20:48.134617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:48.134666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:15.956 [2024-11-25 23:20:48.134679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.511 ms 00:19:15.956 [2024-11-25 23:20:48.134689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 23:20:48.159106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:48.159160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:15.956 [2024-11-25 23:20:48.159173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.374 ms 00:19:15.956 [2024-11-25 23:20:48.159183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 23:20:48.159782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:48.159801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:15.956 [2024-11-25 23:20:48.159811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:19:15.956 [2024-11-25 23:20:48.159821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 23:20:48.246114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:48.246176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:15.956 [2024-11-25 23:20:48.246190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.253 ms 00:19:15.956 [2024-11-25 23:20:48.246201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 
23:20:48.274808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:48.274865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:15.956 [2024-11-25 23:20:48.274883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.511 ms 00:19:15.956 [2024-11-25 23:20:48.274894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.956 [2024-11-25 23:20:48.300959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.956 [2024-11-25 23:20:48.301019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:15.956 [2024-11-25 23:20:48.301032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.013 ms 00:19:15.956 [2024-11-25 23:20:48.301044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.218 [2024-11-25 23:20:48.327140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.218 [2024-11-25 23:20:48.327199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:16.218 [2024-11-25 23:20:48.327213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.024 ms 00:19:16.218 [2024-11-25 23:20:48.327225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.218 [2024-11-25 23:20:48.327282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.218 [2024-11-25 23:20:48.327298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:16.218 [2024-11-25 23:20:48.327309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:16.218 [2024-11-25 23:20:48.327319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.218 [2024-11-25 23:20:48.327419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.218 [2024-11-25 23:20:48.327433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:16.218 [2024-11-25 23:20:48.327442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:16.218 [2024-11-25 23:20:48.327452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.218 [2024-11-25 23:20:48.328634] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3582.980 ms, result 0 00:19:16.218 { 00:19:16.218 "name": "ftl0", 00:19:16.218 "uuid": "d58aaf80-e613-45b9-8b16-3b87dc7922e0" 00:19:16.218 } 00:19:16.218 23:20:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:16.218 23:20:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:16.218 23:20:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:16.218 23:20:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:16.480 [2024-11-25 23:20:48.636742] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:16.480 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:16.480 Zero copy mechanism will not be used. 00:19:16.480 Running I/O for 4 seconds... 
00:19:18.367 1095.00 IOPS, 72.71 MiB/s [2024-11-25T23:20:51.679Z] 1041.50 IOPS, 69.16 MiB/s [2024-11-25T23:20:53.063Z] 1058.00 IOPS, 70.26 MiB/s [2024-11-25T23:20:53.063Z] 1080.00 IOPS, 71.72 MiB/s 00:19:20.694 Latency(us) 00:19:20.694 [2024-11-25T23:20:53.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.694 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:20.694 ftl0 : 4.00 1079.71 71.70 0.00 0.00 969.98 230.01 3654.89 00:19:20.694 [2024-11-25T23:20:53.063Z] =================================================================================================================== 00:19:20.694 [2024-11-25T23:20:53.063Z] Total : 1079.71 71.70 0.00 0.00 969.98 230.01 3654.89 00:19:20.694 [2024-11-25 23:20:52.648358] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:20.694 { 00:19:20.694 "results": [ 00:19:20.694 { 00:19:20.694 "job": "ftl0", 00:19:20.694 "core_mask": "0x1", 00:19:20.694 "workload": "randwrite", 00:19:20.694 "status": "finished", 00:19:20.694 "queue_depth": 1, 00:19:20.694 "io_size": 69632, 00:19:20.694 "runtime": 4.00199, 00:19:20.694 "iops": 1079.7128428606768, 00:19:20.694 "mibps": 71.69968097121682, 00:19:20.694 "io_failed": 0, 00:19:20.694 "io_timeout": 0, 00:19:20.694 "avg_latency_us": 969.9753547077778, 00:19:20.694 "min_latency_us": 230.00615384615384, 00:19:20.694 "max_latency_us": 3654.892307692308 00:19:20.694 } 00:19:20.694 ], 00:19:20.694 "core_count": 1 00:19:20.694 } 00:19:20.694 23:20:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:20.694 [2024-11-25 23:20:52.740884] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:20.694 Running I/O for 4 seconds... 
00:19:22.581 7636.00 IOPS, 29.83 MiB/s [2024-11-25T23:20:55.893Z] 7466.50 IOPS, 29.17 MiB/s [2024-11-25T23:20:56.837Z] 6596.00 IOPS, 25.77 MiB/s [2024-11-25T23:20:56.837Z] 6239.75 IOPS, 24.37 MiB/s 00:19:24.468 Latency(us) 00:19:24.468 [2024-11-25T23:20:56.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.468 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:24.469 ftl0 : 4.02 6233.42 24.35 0.00 0.00 20477.21 245.76 48799.11 00:19:24.469 [2024-11-25T23:20:56.838Z] =================================================================================================================== 00:19:24.469 [2024-11-25T23:20:56.838Z] Total : 6233.42 24.35 0.00 0.00 20477.21 0.00 48799.11 00:19:24.469 { 00:19:24.469 "results": [ 00:19:24.469 { 00:19:24.469 "job": "ftl0", 00:19:24.469 "core_mask": "0x1", 00:19:24.469 "workload": "randwrite", 00:19:24.469 "status": "finished", 00:19:24.469 "queue_depth": 128, 00:19:24.469 "io_size": 4096, 00:19:24.469 "runtime": 4.023634, 00:19:24.469 "iops": 6233.419838881966, 00:19:24.469 "mibps": 24.34929624563268, 00:19:24.469 "io_failed": 0, 00:19:24.469 "io_timeout": 0, 00:19:24.469 "avg_latency_us": 20477.212942926457, 00:19:24.469 "min_latency_us": 245.76, 00:19:24.469 "max_latency_us": 48799.11384615384 00:19:24.469 } 00:19:24.469 ], 00:19:24.469 "core_count": 1 00:19:24.469 } 00:19:24.469 [2024-11-25 23:20:56.773647] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:24.469 23:20:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:24.730 [2024-11-25 23:20:56.886225] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:24.730 Running I/O for 4 seconds... 
00:19:26.623 6207.00 IOPS, 24.25 MiB/s [2024-11-25T23:20:59.934Z] 5361.00 IOPS, 20.94 MiB/s [2024-11-25T23:21:01.319Z] 5424.67 IOPS, 21.19 MiB/s [2024-11-25T23:21:01.319Z] 5276.25 IOPS, 20.61 MiB/s 00:19:28.950 Latency(us) 00:19:28.950 [2024-11-25T23:21:01.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.950 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:28.950 Verification LBA range: start 0x0 length 0x1400000 00:19:28.950 ftl0 : 4.02 5281.73 20.63 0.00 0.00 24149.93 293.02 34078.72 00:19:28.950 [2024-11-25T23:21:01.319Z] =================================================================================================================== 00:19:28.950 [2024-11-25T23:21:01.319Z] Total : 5281.73 20.63 0.00 0.00 24149.93 0.00 34078.72 00:19:28.950 [2024-11-25 23:21:00.921839] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:28.950 { 00:19:28.950 "results": [ 00:19:28.950 { 00:19:28.950 "job": "ftl0", 00:19:28.950 "core_mask": "0x1", 00:19:28.950 "workload": "verify", 00:19:28.950 "status": "finished", 00:19:28.950 "verify_range": { 00:19:28.950 "start": 0, 00:19:28.950 "length": 20971520 00:19:28.950 }, 00:19:28.950 "queue_depth": 128, 00:19:28.950 "io_size": 4096, 00:19:28.950 "runtime": 4.019702, 00:19:28.950 "iops": 5281.734815167891, 00:19:28.950 "mibps": 20.631776621749573, 00:19:28.950 "io_failed": 0, 00:19:28.950 "io_timeout": 0, 00:19:28.950 "avg_latency_us": 24149.926504856834, 00:19:28.950 "min_latency_us": 293.02153846153846, 00:19:28.950 "max_latency_us": 34078.72 00:19:28.950 } 00:19:28.950 ], 00:19:28.950 "core_count": 1 00:19:28.950 } 00:19:28.950 23:21:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:28.950 [2024-11-25 23:21:01.137789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.950 [2024-11-25 23:21:01.137861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.950 [2024-11-25 23:21:01.137876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:28.950 [2024-11-25 23:21:01.137889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.950 [2024-11-25 23:21:01.137914] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.950 [2024-11-25 23:21:01.141346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.950 [2024-11-25 23:21:01.141391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.950 [2024-11-25 23:21:01.141407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.409 ms 00:19:28.950 [2024-11-25 23:21:01.141416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.950 [2024-11-25 23:21:01.144566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.950 [2024-11-25 23:21:01.144613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.950 [2024-11-25 23:21:01.144628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.113 ms 00:19:28.950 [2024-11-25 23:21:01.144642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.212 [2024-11-25 23:21:01.367437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.212 [2024-11-25 23:21:01.367692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:19:29.212 [2024-11-25 23:21:01.367728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 222.765 ms 00:19:29.212 [2024-11-25 23:21:01.367738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.212 [2024-11-25 23:21:01.373992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.212 [2024-11-25 23:21:01.374038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:29.212 [2024-11-25 23:21:01.374069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.203 ms 00:19:29.212 [2024-11-25 23:21:01.374080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.212 [2024-11-25 23:21:01.400913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.212 [2024-11-25 23:21:01.400964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:29.212 [2024-11-25 23:21:01.400982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.740 ms 00:19:29.212 [2024-11-25 23:21:01.400991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.212 [2024-11-25 23:21:01.420009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.212 [2024-11-25 23:21:01.420077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:29.212 [2024-11-25 23:21:01.420094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.966 ms 00:19:29.212 [2024-11-25 23:21:01.420104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.212 [2024-11-25 23:21:01.420291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.212 [2024-11-25 23:21:01.420306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:29.212 [2024-11-25 23:21:01.420323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:19:29.212 [2024-11-25 23:21:01.420332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.212 [2024-11-25 23:21:01.446532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.212 [2024-11-25 23:21:01.446581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:29.212 [2024-11-25 23:21:01.446596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.180 ms 00:19:29.212 [2024-11-25 23:21:01.446604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.212 [2024-11-25 23:21:01.471935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.212 [2024-11-25 23:21:01.471984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:29.212 [2024-11-25 23:21:01.471999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.278 ms 00:19:29.212 [2024-11-25 23:21:01.472006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.212 [2024-11-25 23:21:01.496771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.212 [2024-11-25 23:21:01.496819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:29.212 [2024-11-25 23:21:01.496846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.715 ms 00:19:29.212 [2024-11-25 23:21:01.496855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.212 [2024-11-25 23:21:01.521604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.212 [2024-11-25 23:21:01.521651] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:29.212 [2024-11-25 23:21:01.521668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.633 ms 00:19:29.212 [2024-11-25 23:21:01.521676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.212 [2024-11-25 23:21:01.521724] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:29.212 [2024-11-25 23:21:01.521742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:29.212 [2024-11-25 23:21:01.521988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.521997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.522006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.522017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.522027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.522037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.522046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.522080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.522092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.522103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.522112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.522123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.522131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.522142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.522151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.522161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:29.212 [2024-11-25 23:21:01.522179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522768] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:29.213 [2024-11-25 23:21:01.522813] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:29.213 [2024-11-25 23:21:01.522825] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d58aaf80-e613-45b9-8b16-3b87dc7922e0 00:19:29.213 [2024-11-25 23:21:01.522834] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:29.213 [2024-11-25 23:21:01.522848] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:29.213 [2024-11-25 23:21:01.522855] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:29.213 [2024-11-25 23:21:01.522865] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:29.213 [2024-11-25 23:21:01.522872] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:29.213 [2024-11-25 23:21:01.522883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:29.213 [2024-11-25 23:21:01.522890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:29.213 [2024-11-25 23:21:01.522900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:29.213 [2024-11-25 23:21:01.522906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:29.213 [2024-11-25 23:21:01.522916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.213 [2024-11-25 23:21:01.522924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:29.213 [2024-11-25 23:21:01.522935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:19:29.213 [2024-11-25 23:21:01.522944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.213 [2024-11-25 23:21:01.537557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.213 [2024-11-25 23:21:01.537762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:29.213 [2024-11-25 23:21:01.537788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.568 ms 00:19:29.213 [2024-11-25 23:21:01.537798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.213 [2024-11-25 23:21:01.538284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.213 [2024-11-25 23:21:01.538305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:29.213 [2024-11-25 23:21:01.538318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:19:29.213 [2024-11-25 23:21:01.538326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.475 [2024-11-25 23:21:01.580592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.475 [2024-11-25 23:21:01.580789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:29.475 [2024-11-25 23:21:01.580836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.475 [2024-11-25 23:21:01.580847] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:29.475 [2024-11-25 23:21:01.580927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.475 [2024-11-25 23:21:01.580937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:29.475 [2024-11-25 23:21:01.580948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.475 [2024-11-25 23:21:01.580957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.475 [2024-11-25 23:21:01.581050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.475 [2024-11-25 23:21:01.581085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:29.475 [2024-11-25 23:21:01.581098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.475 [2024-11-25 23:21:01.581107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.475 [2024-11-25 23:21:01.581127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.475 [2024-11-25 23:21:01.581137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:29.475 [2024-11-25 23:21:01.581148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.475 [2024-11-25 23:21:01.581156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.475 [2024-11-25 23:21:01.673080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.475 [2024-11-25 23:21:01.673142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:29.475 [2024-11-25 23:21:01.673162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.475 [2024-11-25 23:21:01.673171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.475 [2024-11-25 23:21:01.747426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.475 [2024-11-25 23:21:01.747716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:29.475 [2024-11-25 23:21:01.747741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.475 [2024-11-25 23:21:01.747758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.475 [2024-11-25 23:21:01.747902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.475 [2024-11-25 23:21:01.747918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.475 [2024-11-25 23:21:01.747931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.475 [2024-11-25 23:21:01.747939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.475 [2024-11-25 23:21:01.747990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.475 [2024-11-25 23:21:01.748001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.475 [2024-11-25 23:21:01.748012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.475 [2024-11-25 23:21:01.748021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.475 [2024-11-25 23:21:01.748170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.475 [2024-11-25 23:21:01.748183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.475 [2024-11-25 23:21:01.748202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:29.475 [2024-11-25 23:21:01.748211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.475 [2024-11-25 23:21:01.748252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.475 [2024-11-25 23:21:01.748263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:29.475 [2024-11-25 23:21:01.748273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.475 [2024-11-25 23:21:01.748281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.475 [2024-11-25 23:21:01.748335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.475 [2024-11-25 23:21:01.748345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.475 [2024-11-25 23:21:01.748359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.475 [2024-11-25 23:21:01.748377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.475 [2024-11-25 23:21:01.748443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.475 [2024-11-25 23:21:01.748456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.475 [2024-11-25 23:21:01.748469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.475 [2024-11-25 23:21:01.748480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.475 [2024-11-25 23:21:01.748665] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 610.806 ms, result 0 00:19:29.475 true 00:19:29.475 23:21:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76003 00:19:29.475 23:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76003 ']' 00:19:29.475 23:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76003 00:19:29.475 23:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:29.475 23:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.475 23:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76003 00:19:29.475 killing process with pid 76003 00:19:29.475 Received shutdown signal, test time was about 4.000000 seconds 00:19:29.475 00:19:29.475 Latency(us) 00:19:29.475 [2024-11-25T23:21:01.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.475 [2024-11-25T23:21:01.844Z] =================================================================================================================== 00:19:29.475 [2024-11-25T23:21:01.844Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:29.475 23:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:29.475 23:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:29.475 23:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76003' 00:19:29.475 23:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76003 00:19:29.475 23:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76003 00:19:30.862 Remove shared memory files 00:19:30.862 23:21:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:30.862 23:21:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:30.862 23:21:02 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:30.862 23:21:02 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:30.862 23:21:02 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:30.862 23:21:02 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:30.862 23:21:02 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:30.862 23:21:02 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:30.862 ************************************ 00:19:30.862 END TEST ftl_bdevperf 00:19:30.862 ************************************ 00:19:30.862 00:19:30.862 real 0m22.046s 00:19:30.862 user 0m24.539s 00:19:30.862 sys 0m0.922s 00:19:30.862 23:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.862 23:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:30.862 23:21:02 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:30.862 23:21:02 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:30.862 23:21:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.862 23:21:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:30.862 ************************************ 00:19:30.862 START TEST ftl_trim 00:19:30.862 ************************************ 00:19:30.862 23:21:02 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:30.862 * Looking for test storage... 00:19:30.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:30.862 23:21:03 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:30.862 23:21:03 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:30.862 23:21:03 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:30.862 23:21:03 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:30.862 23:21:03 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:30.862 23:21:03 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.862 23:21:03 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:30.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.862 --rc genhtml_branch_coverage=1 00:19:30.862 --rc genhtml_function_coverage=1 00:19:30.862 --rc genhtml_legend=1 00:19:30.862 --rc geninfo_all_blocks=1 00:19:30.862 --rc geninfo_unexecuted_blocks=1 00:19:30.862 00:19:30.862 ' 00:19:30.862 23:21:03 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:30.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.862 --rc genhtml_branch_coverage=1 00:19:30.862 --rc genhtml_function_coverage=1 00:19:30.862 --rc genhtml_legend=1 00:19:30.862 --rc geninfo_all_blocks=1 00:19:30.862 --rc geninfo_unexecuted_blocks=1 00:19:30.862 00:19:30.862 ' 00:19:30.862 23:21:03 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:30.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.862 --rc genhtml_branch_coverage=1 00:19:30.862 --rc genhtml_function_coverage=1 00:19:30.862 --rc genhtml_legend=1 00:19:30.862 --rc geninfo_all_blocks=1 00:19:30.862 --rc geninfo_unexecuted_blocks=1 00:19:30.862 00:19:30.862 ' 00:19:30.862 23:21:03 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:30.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.862 --rc genhtml_branch_coverage=1 00:19:30.862 --rc genhtml_function_coverage=1 00:19:30.863 --rc genhtml_legend=1 00:19:30.863 --rc geninfo_all_blocks=1 00:19:30.863 --rc geninfo_unexecuted_blocks=1 00:19:30.863 00:19:30.863 ' 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
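A note on the scripts/common.sh xtrace above: lt/cmp_versions decide whether the installed lcov is at least version 2 by splitting both version strings on '.', '-' and ':' into arrays and comparing them component by component, treating missing components as zero. A minimal standalone sketch of that logic, under a hypothetical name version_lt and assuming purely numeric components:

# Minimal sketch (ours, not SPDK's) of the version comparison traced above.
# version_lt VER1 VER2 returns 0 (true) when VER1 is strictly older than VER2;
# it assumes purely numeric components, which the real helper also validates.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # A missing component counts as 0, so "1.15" compares like "1.15.0".
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov predates 2"

Against the run above, version_lt 1.15 2 succeeds, which is the branch that exports the lcov 1.x LCOV_OPTS seen in the trace.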
00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:30.863 23:21:03 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76347 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76347 00:19:30.863 23:21:03 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:30.863 23:21:03 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76347 ']' 00:19:30.863 23:21:03 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.863 23:21:03 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.863 23:21:03 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.863 23:21:03 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.863 23:21:03 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:31.124 [2024-11-25 23:21:03.246396] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:19:31.124 [2024-11-25 23:21:03.246719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76347 ] 00:19:31.124 [2024-11-25 23:21:03.412227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:31.385 [2024-11-25 23:21:03.572192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.385 [2024-11-25 23:21:03.572566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.385 [2024-11-25 23:21:03.572723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.328 23:21:04 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.328 23:21:04 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:32.328 23:21:04 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:32.328 23:21:04 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:32.328 23:21:04 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:32.328 23:21:04 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:32.328 23:21:04 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:32.328 23:21:04 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:32.328 23:21:04 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:32.328 23:21:04 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:32.328 23:21:04 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:32.328 23:21:04 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:32.328 23:21:04 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:32.328 23:21:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:32.328 23:21:04 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:32.328 23:21:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:32.590 23:21:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:32.590 { 00:19:32.590 "name": "nvme0n1", 00:19:32.590 "aliases": [ 
00:19:32.590 "096b5d02-619e-43f5-9bb4-096a85debe40" 00:19:32.590 ], 00:19:32.590 "product_name": "NVMe disk", 00:19:32.590 "block_size": 4096, 00:19:32.590 "num_blocks": 1310720, 00:19:32.590 "uuid": "096b5d02-619e-43f5-9bb4-096a85debe40", 00:19:32.590 "numa_id": -1, 00:19:32.590 "assigned_rate_limits": { 00:19:32.590 "rw_ios_per_sec": 0, 00:19:32.590 "rw_mbytes_per_sec": 0, 00:19:32.590 "r_mbytes_per_sec": 0, 00:19:32.590 "w_mbytes_per_sec": 0 00:19:32.590 }, 00:19:32.590 "claimed": true, 00:19:32.590 "claim_type": "read_many_write_one", 00:19:32.590 "zoned": false, 00:19:32.590 "supported_io_types": { 00:19:32.590 "read": true, 00:19:32.590 "write": true, 00:19:32.590 "unmap": true, 00:19:32.590 "flush": true, 00:19:32.590 "reset": true, 00:19:32.590 "nvme_admin": true, 00:19:32.590 "nvme_io": true, 00:19:32.590 "nvme_io_md": false, 00:19:32.590 "write_zeroes": true, 00:19:32.590 "zcopy": false, 00:19:32.590 "get_zone_info": false, 00:19:32.590 "zone_management": false, 00:19:32.590 "zone_append": false, 00:19:32.590 "compare": true, 00:19:32.590 "compare_and_write": false, 00:19:32.590 "abort": true, 00:19:32.590 "seek_hole": false, 00:19:32.590 "seek_data": false, 00:19:32.590 "copy": true, 00:19:32.590 "nvme_iov_md": false 00:19:32.590 }, 00:19:32.590 "driver_specific": { 00:19:32.590 "nvme": [ 00:19:32.590 { 00:19:32.590 "pci_address": "0000:00:11.0", 00:19:32.590 "trid": { 00:19:32.590 "trtype": "PCIe", 00:19:32.590 "traddr": "0000:00:11.0" 00:19:32.590 }, 00:19:32.590 "ctrlr_data": { 00:19:32.590 "cntlid": 0, 00:19:32.590 "vendor_id": "0x1b36", 00:19:32.590 "model_number": "QEMU NVMe Ctrl", 00:19:32.590 "serial_number": "12341", 00:19:32.590 "firmware_revision": "8.0.0", 00:19:32.590 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:32.590 "oacs": { 00:19:32.590 "security": 0, 00:19:32.590 "format": 1, 00:19:32.590 "firmware": 0, 00:19:32.590 "ns_manage": 1 00:19:32.590 }, 00:19:32.590 "multi_ctrlr": false, 00:19:32.590 "ana_reporting": false 00:19:32.590 }, 00:19:32.590 "vs": { 00:19:32.590 "nvme_version": "1.4" 00:19:32.590 }, 00:19:32.590 "ns_data": { 00:19:32.590 "id": 1, 00:19:32.590 "can_share": false 00:19:32.590 } 00:19:32.590 } 00:19:32.590 ], 00:19:32.590 "mp_policy": "active_passive" 00:19:32.590 } 00:19:32.590 } 00:19:32.590 ]' 00:19:32.590 23:21:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:32.590 23:21:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:32.590 23:21:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:32.852 23:21:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:32.852 23:21:04 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:32.852 23:21:04 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:32.852 23:21:04 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:32.852 23:21:04 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:32.852 23:21:04 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:32.852 23:21:04 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:32.852 23:21:04 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:32.852 23:21:05 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=893dee21-3592-42cb-85cf-deb7e828d057 00:19:32.852 23:21:05 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:32.852 23:21:05 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 893dee21-3592-42cb-85cf-deb7e828d057 00:19:33.114 23:21:05 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:33.376 23:21:05 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=fc2abf3b-c4f8-4ca4-b3e4-a2a1e9d58562 00:19:33.376 23:21:05 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fc2abf3b-c4f8-4ca4-b3e4-a2a1e9d58562 00:19:33.637 23:21:05 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=672523ec-4f95-4e2f-88b6-8b19a1c1ab9f 00:19:33.637 23:21:05 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 672523ec-4f95-4e2f-88b6-8b19a1c1ab9f 00:19:33.637 23:21:05 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:33.637 23:21:05 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:33.637 23:21:05 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=672523ec-4f95-4e2f-88b6-8b19a1c1ab9f 00:19:33.638 23:21:05 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:33.638 23:21:05 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 672523ec-4f95-4e2f-88b6-8b19a1c1ab9f 00:19:33.638 23:21:05 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=672523ec-4f95-4e2f-88b6-8b19a1c1ab9f 00:19:33.638 23:21:05 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:33.638 23:21:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:33.638 23:21:05 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:33.638 23:21:05 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 672523ec-4f95-4e2f-88b6-8b19a1c1ab9f 00:19:33.899 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:33.899 { 00:19:33.899 "name": "672523ec-4f95-4e2f-88b6-8b19a1c1ab9f", 00:19:33.899 "aliases": [ 00:19:33.899 "lvs/nvme0n1p0" 00:19:33.899 ], 00:19:33.899 "product_name": "Logical Volume", 00:19:33.899 "block_size": 4096, 00:19:33.899 "num_blocks": 26476544, 00:19:33.899 "uuid": "672523ec-4f95-4e2f-88b6-8b19a1c1ab9f", 00:19:33.899 "assigned_rate_limits": { 00:19:33.899 "rw_ios_per_sec": 0, 00:19:33.899 "rw_mbytes_per_sec": 0, 00:19:33.899 "r_mbytes_per_sec": 0, 00:19:33.899 "w_mbytes_per_sec": 0 00:19:33.899 }, 00:19:33.899 "claimed": false, 00:19:33.899 "zoned": false, 00:19:33.899 "supported_io_types": { 00:19:33.899 "read": true, 00:19:33.899 "write": true, 00:19:33.899 "unmap": true, 00:19:33.899 "flush": false, 00:19:33.899 "reset": true, 00:19:33.899 "nvme_admin": false, 00:19:33.899 "nvme_io": false, 00:19:33.899 "nvme_io_md": false, 00:19:33.899 "write_zeroes": true, 00:19:33.899 "zcopy": false, 00:19:33.899 "get_zone_info": false, 00:19:33.899 "zone_management": false, 00:19:33.899 "zone_append": false, 00:19:33.899 "compare": false, 00:19:33.899 "compare_and_write": false, 00:19:33.899 "abort": false, 00:19:33.899 "seek_hole": true, 00:19:33.899 "seek_data": true, 00:19:33.899 "copy": false, 00:19:33.899 "nvme_iov_md": false 00:19:33.899 }, 00:19:33.899 "driver_specific": { 00:19:33.899 "lvol": { 00:19:33.899 "lvol_store_uuid": "fc2abf3b-c4f8-4ca4-b3e4-a2a1e9d58562", 00:19:33.899 "base_bdev": "nvme0n1", 00:19:33.899 "thin_provision": true, 00:19:33.899 "num_allocated_clusters": 0, 00:19:33.899 "snapshot": false, 00:19:33.899 "clone": false, 00:19:33.899 "esnap_clone": false 00:19:33.899 } 00:19:33.899 } 00:19:33.899 } 00:19:33.899 ]' 00:19:33.899 23:21:06 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:33.899 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:33.899 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:33.899 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:33.899 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:33.899 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:33.899 23:21:06 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:33.899 23:21:06 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:33.899 23:21:06 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:34.160 23:21:06 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:34.160 23:21:06 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:34.160 23:21:06 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 672523ec-4f95-4e2f-88b6-8b19a1c1ab9f 00:19:34.160 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=672523ec-4f95-4e2f-88b6-8b19a1c1ab9f 00:19:34.160 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:34.160 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:34.160 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:34.160 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 672523ec-4f95-4e2f-88b6-8b19a1c1ab9f 00:19:34.418 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:34.418 { 00:19:34.418 "name": "672523ec-4f95-4e2f-88b6-8b19a1c1ab9f", 00:19:34.418 "aliases": [ 00:19:34.418 "lvs/nvme0n1p0" 00:19:34.418 ], 00:19:34.418 "product_name": "Logical Volume", 00:19:34.418 "block_size": 4096, 00:19:34.418 "num_blocks": 26476544, 00:19:34.418 "uuid": "672523ec-4f95-4e2f-88b6-8b19a1c1ab9f", 00:19:34.418 "assigned_rate_limits": { 00:19:34.418 "rw_ios_per_sec": 0, 00:19:34.418 "rw_mbytes_per_sec": 0, 00:19:34.418 "r_mbytes_per_sec": 0, 00:19:34.418 "w_mbytes_per_sec": 0 00:19:34.418 }, 00:19:34.418 "claimed": false, 00:19:34.418 "zoned": false, 00:19:34.418 "supported_io_types": { 00:19:34.418 "read": true, 00:19:34.418 "write": true, 00:19:34.418 "unmap": true, 00:19:34.418 "flush": false, 00:19:34.418 "reset": true, 00:19:34.418 "nvme_admin": false, 00:19:34.418 "nvme_io": false, 00:19:34.418 "nvme_io_md": false, 00:19:34.418 "write_zeroes": true, 00:19:34.418 "zcopy": false, 00:19:34.418 "get_zone_info": false, 00:19:34.418 "zone_management": false, 00:19:34.418 "zone_append": false, 00:19:34.418 "compare": false, 00:19:34.418 "compare_and_write": false, 00:19:34.418 "abort": false, 00:19:34.418 "seek_hole": true, 00:19:34.418 "seek_data": true, 00:19:34.418 "copy": false, 00:19:34.418 "nvme_iov_md": false 00:19:34.419 }, 00:19:34.419 "driver_specific": { 00:19:34.419 "lvol": { 00:19:34.419 "lvol_store_uuid": "fc2abf3b-c4f8-4ca4-b3e4-a2a1e9d58562", 00:19:34.419 "base_bdev": "nvme0n1", 00:19:34.419 "thin_provision": true, 00:19:34.419 "num_allocated_clusters": 0, 00:19:34.419 "snapshot": false, 00:19:34.419 "clone": false, 00:19:34.419 "esnap_clone": false 00:19:34.419 } 00:19:34.419 } 00:19:34.419 } 00:19:34.419 ]' 00:19:34.419 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:34.419 23:21:06 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:34.419 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:34.419 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:34.419 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:34.419 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:34.419 23:21:06 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:34.419 23:21:06 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:34.677 23:21:06 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:34.677 23:21:06 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:34.677 23:21:06 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 672523ec-4f95-4e2f-88b6-8b19a1c1ab9f 00:19:34.677 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=672523ec-4f95-4e2f-88b6-8b19a1c1ab9f 00:19:34.677 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:34.677 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:34.677 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:34.677 23:21:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 672523ec-4f95-4e2f-88b6-8b19a1c1ab9f 00:19:34.935 23:21:07 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:34.935 { 00:19:34.935 "name": "672523ec-4f95-4e2f-88b6-8b19a1c1ab9f", 00:19:34.935 "aliases": [ 00:19:34.935 "lvs/nvme0n1p0" 00:19:34.935 ], 00:19:34.935 "product_name": "Logical Volume", 00:19:34.935 "block_size": 4096, 00:19:34.935 "num_blocks": 26476544, 00:19:34.935 "uuid": "672523ec-4f95-4e2f-88b6-8b19a1c1ab9f", 00:19:34.935 "assigned_rate_limits": { 00:19:34.935 "rw_ios_per_sec": 0, 00:19:34.935 "rw_mbytes_per_sec": 0, 00:19:34.935 "r_mbytes_per_sec": 0, 00:19:34.935 "w_mbytes_per_sec": 0 00:19:34.935 }, 00:19:34.936 "claimed": false, 00:19:34.936 "zoned": false, 00:19:34.936 "supported_io_types": { 00:19:34.936 "read": true, 00:19:34.936 "write": true, 00:19:34.936 "unmap": true, 00:19:34.936 "flush": false, 00:19:34.936 "reset": true, 00:19:34.936 "nvme_admin": false, 00:19:34.936 "nvme_io": false, 00:19:34.936 "nvme_io_md": false, 00:19:34.936 "write_zeroes": true, 00:19:34.936 "zcopy": false, 00:19:34.936 "get_zone_info": false, 00:19:34.936 "zone_management": false, 00:19:34.936 "zone_append": false, 00:19:34.936 "compare": false, 00:19:34.936 "compare_and_write": false, 00:19:34.936 "abort": false, 00:19:34.936 "seek_hole": true, 00:19:34.936 "seek_data": true, 00:19:34.936 "copy": false, 00:19:34.936 "nvme_iov_md": false 00:19:34.936 }, 00:19:34.936 "driver_specific": { 00:19:34.936 "lvol": { 00:19:34.936 "lvol_store_uuid": "fc2abf3b-c4f8-4ca4-b3e4-a2a1e9d58562", 00:19:34.936 "base_bdev": "nvme0n1", 00:19:34.936 "thin_provision": true, 00:19:34.936 "num_allocated_clusters": 0, 00:19:34.936 "snapshot": false, 00:19:34.936 "clone": false, 00:19:34.936 "esnap_clone": false 00:19:34.936 } 00:19:34.936 } 00:19:34.936 } 00:19:34.936 ]' 00:19:34.936 23:21:07 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:34.936 23:21:07 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:34.936 23:21:07 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:34.936 23:21:07 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:34.936 23:21:07 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:34.936 23:21:07 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:34.936 23:21:07 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:34.936 23:21:07 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 672523ec-4f95-4e2f-88b6-8b19a1c1ab9f -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:35.195 [2024-11-25 23:21:07.317173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.195 [2024-11-25 23:21:07.317213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:35.195 [2024-11-25 23:21:07.317227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.195 [2024-11-25 23:21:07.317234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.195 [2024-11-25 23:21:07.319530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.195 [2024-11-25 23:21:07.319646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.196 [2024-11-25 23:21:07.319661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.271 ms 00:19:35.196 [2024-11-25 23:21:07.319667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.196 [2024-11-25 23:21:07.319731] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:35.196 [2024-11-25 23:21:07.320256] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:35.196 [2024-11-25 23:21:07.320277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.196 [2024-11-25 23:21:07.320284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.196 [2024-11-25 23:21:07.320293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:19:35.196 [2024-11-25 23:21:07.320299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.196 [2024-11-25 23:21:07.320395] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8a832dbc-fe5a-4899-8dc2-20f67e9df730 00:19:35.196 [2024-11-25 23:21:07.321611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.196 [2024-11-25 23:21:07.321640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:35.196 [2024-11-25 23:21:07.321648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:35.196 [2024-11-25 23:21:07.321657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.196 [2024-11-25 23:21:07.328353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.196 [2024-11-25 23:21:07.328379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.196 [2024-11-25 23:21:07.328386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.615 ms 00:19:35.196 [2024-11-25 23:21:07.328395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.196 [2024-11-25 23:21:07.328505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.196 [2024-11-25 23:21:07.328516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.196 [2024-11-25 23:21:07.328522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.059 ms 00:19:35.196 [2024-11-25 23:21:07.328533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.196 [2024-11-25 23:21:07.328557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.196 [2024-11-25 23:21:07.328566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:35.196 [2024-11-25 23:21:07.328574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.196 [2024-11-25 23:21:07.328583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.196 [2024-11-25 23:21:07.328613] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:35.196 [2024-11-25 23:21:07.331803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.196 [2024-11-25 23:21:07.331827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.196 [2024-11-25 23:21:07.331838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.191 ms 00:19:35.196 [2024-11-25 23:21:07.331844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.196 [2024-11-25 23:21:07.331890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.196 [2024-11-25 23:21:07.331910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:35.196 [2024-11-25 23:21:07.331918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:35.196 [2024-11-25 23:21:07.331924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.196 [2024-11-25 23:21:07.331951] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:35.196 [2024-11-25 23:21:07.332074] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:35.196 [2024-11-25 23:21:07.332088] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:35.196 [2024-11-25 23:21:07.332097] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:35.196 [2024-11-25 23:21:07.332107] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:35.196 [2024-11-25 23:21:07.332114] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:35.196 [2024-11-25 23:21:07.332122] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:35.196 [2024-11-25 23:21:07.332128] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:35.196 [2024-11-25 23:21:07.332137] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:35.196 [2024-11-25 23:21:07.332144] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:35.196 [2024-11-25 23:21:07.332152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.196 [2024-11-25 23:21:07.332158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:35.196 [2024-11-25 23:21:07.332165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:19:35.196 [2024-11-25 23:21:07.332171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.196 [2024-11-25 23:21:07.332250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.196 
[2024-11-25 23:21:07.332257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:35.196 [2024-11-25 23:21:07.332265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:35.196 [2024-11-25 23:21:07.332270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.196 [2024-11-25 23:21:07.332370] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:35.196 [2024-11-25 23:21:07.332384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:35.196 [2024-11-25 23:21:07.332392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.196 [2024-11-25 23:21:07.332398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.196 [2024-11-25 23:21:07.332406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:35.196 [2024-11-25 23:21:07.332412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:35.196 [2024-11-25 23:21:07.332418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:35.196 [2024-11-25 23:21:07.332424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:35.196 [2024-11-25 23:21:07.332430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:35.196 [2024-11-25 23:21:07.332435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.196 [2024-11-25 23:21:07.332441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:35.196 [2024-11-25 23:21:07.332447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:35.196 [2024-11-25 23:21:07.332453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.196 [2024-11-25 23:21:07.332458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:35.196 [2024-11-25 23:21:07.332464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:35.196 [2024-11-25 23:21:07.332469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.196 [2024-11-25 23:21:07.332477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:35.196 [2024-11-25 23:21:07.332482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:35.196 [2024-11-25 23:21:07.332489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.196 [2024-11-25 23:21:07.332496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:35.196 [2024-11-25 23:21:07.332503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:35.196 [2024-11-25 23:21:07.332509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.196 [2024-11-25 23:21:07.332515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:35.196 [2024-11-25 23:21:07.332520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:35.196 [2024-11-25 23:21:07.332527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.196 [2024-11-25 23:21:07.332532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:35.196 [2024-11-25 23:21:07.332539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:35.196 [2024-11-25 23:21:07.332544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.196 [2024-11-25 23:21:07.332550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:35.196 [2024-11-25 23:21:07.332555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:35.196 [2024-11-25 23:21:07.332561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.196 [2024-11-25 23:21:07.332566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:35.196 [2024-11-25 23:21:07.332574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:35.196 [2024-11-25 23:21:07.332579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.196 [2024-11-25 23:21:07.332585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:35.196 [2024-11-25 23:21:07.332591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:35.196 [2024-11-25 23:21:07.332597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.196 [2024-11-25 23:21:07.332604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:35.196 [2024-11-25 23:21:07.332610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:35.196 [2024-11-25 23:21:07.332615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.196 [2024-11-25 23:21:07.332622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:35.196 [2024-11-25 23:21:07.332626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:35.196 [2024-11-25 23:21:07.332633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.196 [2024-11-25 23:21:07.332638] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:35.196 [2024-11-25 23:21:07.332645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:35.196 [2024-11-25 23:21:07.332651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.196 [2024-11-25 23:21:07.332658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.196 [2024-11-25 23:21:07.332663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:35.196 [2024-11-25 23:21:07.332673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:35.196 [2024-11-25 23:21:07.332679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:35.196 [2024-11-25 23:21:07.332685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:35.196 [2024-11-25 23:21:07.332692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:35.197 [2024-11-25 23:21:07.332699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:35.197 [2024-11-25 23:21:07.332708] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:35.197 [2024-11-25 23:21:07.332717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.197 [2024-11-25 23:21:07.332727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:35.197 [2024-11-25 23:21:07.332733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:35.197 [2024-11-25 23:21:07.332739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:35.197 [2024-11-25 23:21:07.332746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:35.197 [2024-11-25 23:21:07.332752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:35.197 [2024-11-25 23:21:07.332758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:35.197 [2024-11-25 23:21:07.332764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:35.197 [2024-11-25 23:21:07.332770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:35.197 [2024-11-25 23:21:07.332776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:35.197 [2024-11-25 23:21:07.332785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:35.197 [2024-11-25 23:21:07.332791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:35.197 [2024-11-25 23:21:07.332798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:35.197 [2024-11-25 23:21:07.332804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:35.197 [2024-11-25 23:21:07.332812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:35.197 [2024-11-25 23:21:07.332817] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:35.197 [2024-11-25 23:21:07.332842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.197 [2024-11-25 23:21:07.332848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:35.197 [2024-11-25 23:21:07.332855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:35.197 [2024-11-25 23:21:07.332861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:35.197 [2024-11-25 23:21:07.332868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:35.197 [2024-11-25 23:21:07.332874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.197 [2024-11-25 23:21:07.332881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:35.197 [2024-11-25 23:21:07.332887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:19:35.197 [2024-11-25 23:21:07.332893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.197 [2024-11-25 23:21:07.332974] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:35.197 [2024-11-25 23:21:07.332985] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:37.726 [2024-11-25 23:21:09.835137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.726 [2024-11-25 23:21:09.835308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:37.726 [2024-11-25 23:21:09.835374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2502.152 ms 00:19:37.726 [2024-11-25 23:21:09.835402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.726 [2024-11-25 23:21:09.863412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.726 [2024-11-25 23:21:09.863555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.726 [2024-11-25 23:21:09.863612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.680 ms 00:19:37.726 [2024-11-25 23:21:09.863638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.726 [2024-11-25 23:21:09.863788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.726 [2024-11-25 23:21:09.863817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:37.726 [2024-11-25 23:21:09.863883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:37.726 [2024-11-25 23:21:09.863914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.726 [2024-11-25 23:21:09.907129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.726 [2024-11-25 23:21:09.907344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.726 [2024-11-25 23:21:09.907531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.133 ms 00:19:37.726 [2024-11-25 23:21:09.907577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.726 [2024-11-25 23:21:09.907719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.726 [2024-11-25 23:21:09.907867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.726 [2024-11-25 23:21:09.907906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:37.726 [2024-11-25 23:21:09.907939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.726 [2024-11-25 23:21:09.908456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.726 [2024-11-25 23:21:09.908600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.726 [2024-11-25 23:21:09.908685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:19:37.726 [2024-11-25 23:21:09.908724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.726 [2024-11-25 23:21:09.908984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.726 [2024-11-25 23:21:09.909123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.726 [2024-11-25 23:21:09.909216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:19:37.726 [2024-11-25 23:21:09.909257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.726 [2024-11-25 23:21:09.927653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.726 [2024-11-25 23:21:09.927758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:37.726 [2024-11-25 23:21:09.927806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.298 ms 00:19:37.726 [2024-11-25 23:21:09.927830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.726 [2024-11-25 23:21:09.940106] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:37.726 [2024-11-25 23:21:09.957464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.726 [2024-11-25 23:21:09.957567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:37.726 [2024-11-25 23:21:09.957616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.524 ms 00:19:37.726 [2024-11-25 23:21:09.957639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.726 [2024-11-25 23:21:10.025253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.726 [2024-11-25 23:21:10.025386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:37.726 [2024-11-25 23:21:10.025445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.526 ms 00:19:37.726 [2024-11-25 23:21:10.025469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.726 [2024-11-25 23:21:10.025690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.726 [2024-11-25 23:21:10.025719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:37.726 [2024-11-25 23:21:10.025744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:19:37.726 [2024-11-25 23:21:10.025808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.726 [2024-11-25 23:21:10.049371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.726 [2024-11-25 23:21:10.049495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:37.726 [2024-11-25 23:21:10.049556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.506 ms 00:19:37.726 [2024-11-25 23:21:10.049583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.726 [2024-11-25 23:21:10.074418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.726 [2024-11-25 23:21:10.074575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:37.726 [2024-11-25 23:21:10.074690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.763 ms 00:19:37.726 [2024-11-25 23:21:10.074760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.726 [2024-11-25 23:21:10.075800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.726 [2024-11-25 23:21:10.075931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:37.726 [2024-11-25 23:21:10.076017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:19:37.726 [2024-11-25 23:21:10.076035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.984 [2024-11-25 23:21:10.150561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.984 [2024-11-25 23:21:10.150694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:37.984 [2024-11-25 23:21:10.150717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.458 ms 00:19:37.984 [2024-11-25 23:21:10.150727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
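Stepping back from the startup trace for a moment: the provisioning that produced ftl0 is spread across the ftl/common.sh and ftl/trim.sh xtrace above (common.sh@60, @68, @69, @45, @50, then trim.sh@49). A hedged, condensed replay of just those RPC calls follows; the flags and sizes are copied from this run, while the $rpc shorthand and the $lvs/$lvol captures are ours (the UUIDs differ on every run):

# Condensed replay of the provisioning steps traced above (same flags as this run).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 1. Base device: attach the PCIe namespace that backs the data volume.
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

# 2. Carve a thin-provisioned 103424 MiB lvol out of a fresh lvstore;
#    both RPCs print the identifier the next step needs.
lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

# 3. Cache device: attach the second controller and split off a
#    5171 MiB slice to serve as the NV write-buffer cache.
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$rpc bdev_split_create nvc0n1 -s 5171 1

# 4. Bind both into the FTL bdev under test; startup can take a while
#    (the NV cache scrub above alone ran ~2.5 s), hence the 240 s RPC timeout.
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The sizes in the repeated get_bdev_size checks above follow the same arithmetic: 1310720 blocks x 4096 B is 5120 MiB for the raw namespace, and 26476544 blocks x 4096 B is 103424 MiB for the thin-provisioned lvol.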
00:19:37.984 [2024-11-25 23:21:10.175413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.984 [2024-11-25 23:21:10.175448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:37.984 [2024-11-25 23:21:10.175461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.587 ms 00:19:37.984 [2024-11-25 23:21:10.175470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.984 [2024-11-25 23:21:10.199088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.984 [2024-11-25 23:21:10.199120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:37.984 [2024-11-25 23:21:10.199133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.535 ms 00:19:37.984 [2024-11-25 23:21:10.199141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.984 [2024-11-25 23:21:10.222620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.984 [2024-11-25 23:21:10.222669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:37.984 [2024-11-25 23:21:10.222682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.401 ms 00:19:37.984 [2024-11-25 23:21:10.222690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.984 [2024-11-25 23:21:10.222756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.984 [2024-11-25 23:21:10.222766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:37.984 [2024-11-25 23:21:10.222779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:37.984 [2024-11-25 23:21:10.222787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.984 [2024-11-25 23:21:10.222870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.984 [2024-11-25 23:21:10.222880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:37.984 [2024-11-25 23:21:10.222890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:37.984 [2024-11-25 23:21:10.222898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.984 [2024-11-25 23:21:10.223795] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:37.984 { 00:19:37.984 "name": "ftl0", 00:19:37.984 "uuid": "8a832dbc-fe5a-4899-8dc2-20f67e9df730" 00:19:37.984 } 00:19:37.984 [2024-11-25 23:21:10.226888] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2906.290 ms, result 0 00:19:37.984 [2024-11-25 23:21:10.227677] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:37.984 23:21:10 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:37.984 23:21:10 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:37.984 23:21:10 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:37.984 23:21:10 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:37.984 23:21:10 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:37.984 23:21:10 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:37.984 23:21:10 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:38.242 23:21:10 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:38.500 [ 00:19:38.500 { 00:19:38.500 "name": "ftl0", 00:19:38.500 "aliases": [ 00:19:38.500 "8a832dbc-fe5a-4899-8dc2-20f67e9df730" 00:19:38.500 ], 00:19:38.500 "product_name": "FTL disk", 00:19:38.500 "block_size": 4096, 00:19:38.500 "num_blocks": 23592960, 00:19:38.500 "uuid": "8a832dbc-fe5a-4899-8dc2-20f67e9df730", 00:19:38.501 "assigned_rate_limits": { 00:19:38.501 "rw_ios_per_sec": 0, 00:19:38.501 "rw_mbytes_per_sec": 0, 00:19:38.501 "r_mbytes_per_sec": 0, 00:19:38.501 "w_mbytes_per_sec": 0 00:19:38.501 }, 00:19:38.501 "claimed": false, 00:19:38.501 "zoned": false, 00:19:38.501 "supported_io_types": { 00:19:38.501 "read": true, 00:19:38.501 "write": true, 00:19:38.501 "unmap": true, 00:19:38.501 "flush": true, 00:19:38.501 "reset": false, 00:19:38.501 "nvme_admin": false, 00:19:38.501 "nvme_io": false, 00:19:38.501 "nvme_io_md": false, 00:19:38.501 "write_zeroes": true, 00:19:38.501 "zcopy": false, 00:19:38.501 "get_zone_info": false, 00:19:38.501 "zone_management": false, 00:19:38.501 "zone_append": false, 00:19:38.501 "compare": false, 00:19:38.501 "compare_and_write": false, 00:19:38.501 "abort": false, 00:19:38.501 "seek_hole": false, 00:19:38.501 "seek_data": false, 00:19:38.501 "copy": false, 00:19:38.501 "nvme_iov_md": false 00:19:38.501 }, 00:19:38.501 "driver_specific": { 00:19:38.501 "ftl": { 00:19:38.501 "base_bdev": "672523ec-4f95-4e2f-88b6-8b19a1c1ab9f", 00:19:38.501 "cache": "nvc0n1p0" 00:19:38.501 } 00:19:38.501 } 00:19:38.501 } 00:19:38.501 ] 00:19:38.501 23:21:10 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:38.501 23:21:10 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:38.501 23:21:10 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:38.501 23:21:10 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:38.501 23:21:10 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:38.759 23:21:11 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:38.759 { 00:19:38.759 "name": "ftl0", 00:19:38.759 "aliases": [ 00:19:38.759 "8a832dbc-fe5a-4899-8dc2-20f67e9df730" 00:19:38.759 ], 00:19:38.759 "product_name": "FTL disk", 00:19:38.759 "block_size": 4096, 00:19:38.759 "num_blocks": 23592960, 00:19:38.759 "uuid": "8a832dbc-fe5a-4899-8dc2-20f67e9df730", 00:19:38.759 "assigned_rate_limits": { 00:19:38.759 "rw_ios_per_sec": 0, 00:19:38.759 "rw_mbytes_per_sec": 0, 00:19:38.759 "r_mbytes_per_sec": 0, 00:19:38.759 "w_mbytes_per_sec": 0 00:19:38.759 }, 00:19:38.759 "claimed": false, 00:19:38.759 "zoned": false, 00:19:38.759 "supported_io_types": { 00:19:38.759 "read": true, 00:19:38.759 "write": true, 00:19:38.759 "unmap": true, 00:19:38.759 "flush": true, 00:19:38.759 "reset": false, 00:19:38.759 "nvme_admin": false, 00:19:38.759 "nvme_io": false, 00:19:38.759 "nvme_io_md": false, 00:19:38.759 "write_zeroes": true, 00:19:38.759 "zcopy": false, 00:19:38.759 "get_zone_info": false, 00:19:38.759 "zone_management": false, 00:19:38.759 "zone_append": false, 00:19:38.759 "compare": false, 00:19:38.759 "compare_and_write": false, 00:19:38.759 "abort": false, 00:19:38.759 "seek_hole": false, 00:19:38.759 "seek_data": false, 00:19:38.759 "copy": false, 00:19:38.759 "nvme_iov_md": false 00:19:38.759 }, 00:19:38.759 "driver_specific": { 00:19:38.759 "ftl": { 00:19:38.759 "base_bdev": "672523ec-4f95-4e2f-88b6-8b19a1c1ab9f", 
00:19:38.759 "cache": "nvc0n1p0" 00:19:38.759 } 00:19:38.759 } 00:19:38.759 } 00:19:38.759 ]' 00:19:38.759 23:21:11 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:38.759 23:21:11 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:38.759 23:21:11 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:39.018 [2024-11-25 23:21:11.275702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.018 [2024-11-25 23:21:11.275828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:39.018 [2024-11-25 23:21:11.275844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:39.018 [2024-11-25 23:21:11.275855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.018 [2024-11-25 23:21:11.275889] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:39.018 [2024-11-25 23:21:11.278145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.018 [2024-11-25 23:21:11.278171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:39.018 [2024-11-25 23:21:11.278185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.241 ms 00:19:39.018 [2024-11-25 23:21:11.278192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.018 [2024-11-25 23:21:11.278664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.018 [2024-11-25 23:21:11.278678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:39.018 [2024-11-25 23:21:11.278687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:19:39.018 [2024-11-25 23:21:11.278693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.018 [2024-11-25 23:21:11.281474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.018 [2024-11-25 23:21:11.281494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:39.018 [2024-11-25 23:21:11.281503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.754 ms 00:19:39.018 [2024-11-25 23:21:11.281510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.018 [2024-11-25 23:21:11.286831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.018 [2024-11-25 23:21:11.286933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:39.018 [2024-11-25 23:21:11.286948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.282 ms 00:19:39.018 [2024-11-25 23:21:11.286955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.018 [2024-11-25 23:21:11.304808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.018 [2024-11-25 23:21:11.304845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:39.018 [2024-11-25 23:21:11.304858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.787 ms 00:19:39.018 [2024-11-25 23:21:11.304864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.018 [2024-11-25 23:21:11.317493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.018 [2024-11-25 23:21:11.317598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:39.018 [2024-11-25 23:21:11.317617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.570 ms 00:19:39.018 [2024-11-25 23:21:11.317624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.018 [2024-11-25 23:21:11.317805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.018 [2024-11-25 23:21:11.317815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:39.018 [2024-11-25 23:21:11.317823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:19:39.018 [2024-11-25 23:21:11.317830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.018 [2024-11-25 23:21:11.335713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.018 [2024-11-25 23:21:11.335739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:39.018 [2024-11-25 23:21:11.335748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.855 ms 00:19:39.018 [2024-11-25 23:21:11.335754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.018 [2024-11-25 23:21:11.353362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.018 [2024-11-25 23:21:11.353389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:39.019 [2024-11-25 23:21:11.353400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.552 ms 00:19:39.019 [2024-11-25 23:21:11.353406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.019 [2024-11-25 23:21:11.370648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.019 [2024-11-25 23:21:11.370675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:39.019 [2024-11-25 23:21:11.370685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.190 ms 00:19:39.019 [2024-11-25 23:21:11.370690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.278 [2024-11-25 23:21:11.387738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.278 [2024-11-25 23:21:11.387838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:39.278 [2024-11-25 23:21:11.387854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.960 ms 00:19:39.278 [2024-11-25 23:21:11.387859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.278 [2024-11-25 23:21:11.387909] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:39.278 [2024-11-25 23:21:11.387923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.387932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.387938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.387946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.387952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.387962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.387968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.387976] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.387981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.387989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.387995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.388002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.388008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.388015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.388021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.388029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.388035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.388044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.388049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.388082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.388088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:39.278 [2024-11-25 23:21:11.388097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 
[2024-11-25 23:21:11.388172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:39.279 [2024-11-25 23:21:11.388340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:39.279 [2024-11-25 23:21:11.388633] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:39.279 [2024-11-25 23:21:11.388642] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a832dbc-fe5a-4899-8dc2-20f67e9df730 00:19:39.279 [2024-11-25 23:21:11.388648] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:39.279 [2024-11-25 23:21:11.388655] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:39.279 [2024-11-25 23:21:11.388660] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:39.279 [2024-11-25 23:21:11.388670] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:39.279 [2024-11-25 23:21:11.388675] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:39.279 [2024-11-25 23:21:11.388683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:39.279 [2024-11-25 23:21:11.388689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:39.279 [2024-11-25 23:21:11.388695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:39.279 [2024-11-25 23:21:11.388699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:39.279 [2024-11-25 23:21:11.388706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.279 [2024-11-25 23:21:11.388712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:39.279 [2024-11-25 23:21:11.388719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:19:39.280 [2024-11-25 23:21:11.388725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.280 [2024-11-25 23:21:11.398937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.280 [2024-11-25 23:21:11.399034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:39.280 [2024-11-25 23:21:11.399051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.174 ms 00:19:39.280 [2024-11-25 23:21:11.399069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.280 [2024-11-25 23:21:11.399374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.280 [2024-11-25 23:21:11.399388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:39.280 [2024-11-25 23:21:11.399396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:39.280 [2024-11-25 23:21:11.399402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.280 [2024-11-25 23:21:11.435611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.280 [2024-11-25 23:21:11.435641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:39.280 [2024-11-25 23:21:11.435651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.280 [2024-11-25 23:21:11.435658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.280 [2024-11-25 23:21:11.435746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.280 [2024-11-25 23:21:11.435754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:39.280 [2024-11-25 23:21:11.435762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.280 [2024-11-25 23:21:11.435768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.280 [2024-11-25 23:21:11.435831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.280 [2024-11-25 23:21:11.435841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:39.280 [2024-11-25 23:21:11.435851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.280 [2024-11-25 23:21:11.435857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.280 [2024-11-25 23:21:11.435885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.280 [2024-11-25 23:21:11.435892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:39.280 [2024-11-25 23:21:11.435899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.280 [2024-11-25 23:21:11.435906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.280 [2024-11-25 23:21:11.501477] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.280 [2024-11-25 23:21:11.501639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:39.280 [2024-11-25 23:21:11.501657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.280 [2024-11-25 23:21:11.501664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.280 [2024-11-25 23:21:11.552805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.280 [2024-11-25 23:21:11.552910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:39.280 [2024-11-25 23:21:11.552954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.280 [2024-11-25 23:21:11.552972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.280 [2024-11-25 23:21:11.553084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.280 [2024-11-25 23:21:11.553106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:39.280 [2024-11-25 23:21:11.553130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.280 [2024-11-25 23:21:11.553145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.280 [2024-11-25 23:21:11.553201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.280 [2024-11-25 23:21:11.553247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:39.280 [2024-11-25 23:21:11.553268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.280 [2024-11-25 23:21:11.553283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.280 [2024-11-25 23:21:11.553395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.280 [2024-11-25 23:21:11.553506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:39.280 [2024-11-25 23:21:11.553524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.280 [2024-11-25 23:21:11.553541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.280 [2024-11-25 23:21:11.553648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.280 [2024-11-25 23:21:11.553670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:39.280 [2024-11-25 23:21:11.553731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.280 [2024-11-25 23:21:11.553749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.280 [2024-11-25 23:21:11.553806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.280 [2024-11-25 23:21:11.553849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:39.280 [2024-11-25 23:21:11.553928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.280 [2024-11-25 23:21:11.553946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.280 [2024-11-25 23:21:11.554040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.280 [2024-11-25 23:21:11.554100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:39.280 [2024-11-25 23:21:11.554175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.280 [2024-11-25 23:21:11.554194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:39.280 [2024-11-25 23:21:11.554382] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 278.649 ms, result 0 00:19:39.280 true 00:19:39.280 23:21:11 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76347 00:19:39.280 23:21:11 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76347 ']' 00:19:39.280 23:21:11 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76347 00:19:39.280 23:21:11 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:39.280 23:21:11 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.280 23:21:11 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76347 00:19:39.280 23:21:11 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:39.280 23:21:11 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:39.280 23:21:11 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76347' 00:19:39.280 killing process with pid 76347 00:19:39.280 23:21:11 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76347 00:19:39.280 23:21:11 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76347 00:19:44.554 23:21:16 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:45.940 65536+0 records in 00:19:45.940 65536+0 records out 00:19:45.940 268435456 bytes (268 MB, 256 MiB) copied, 1.07415 s, 250 MB/s 00:19:45.940 23:21:17 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:45.940 [2024-11-25 23:21:17.965665] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
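The hand-off above closes the first FTL instance and starts the write phase of ftl_trim: bdev_ftl_unload drives the 'FTL shutdown' sequence, killprocess 76347 (autotest_common.sh@954-978) stops the target app, and trim.sh@66/@69 produce a 256 MiB random pattern and hand it to spdk_dd. A minimal bash sketch of that flow, assembled from the trace lines quoted in this log — the dd of= destination is inferred from the later --if argument; the other commands and paths are verbatim from the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ftl_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

    # trim.sh@54-56: wrap the live bdev subsystem config into a standalone JSON file
    {
        echo '{"subsystems": ['
        "$rpc_py" save_subsystem_config -n bdev
        echo ']}'
    } > "$ftl_json"

    # trim.sh@59-60: record the FTL bdev geometry before tearing the target down
    nb=$("$rpc_py" bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks')   # 23592960 in this run

    # trim.sh@61: detach ftl0 (the 'FTL shutdown' trace above), then stop the target
    "$rpc_py" bdev_ftl_unload -b ftl0

    # trim.sh@66: 65536 x 4 KiB = 256 MiB of random data as the write pattern
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536

    # trim.sh@69: spdk_dd replays ftl.json (re-creating ftl0 in-process, hence the
    # second 'FTL startup' below) and streams the pattern onto the bdev
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 --json="$ftl_json"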
00:19:45.940 [2024-11-25 23:21:17.965786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76529 ] 00:19:45.940 [2024-11-25 23:21:18.120237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.940 [2024-11-25 23:21:18.218608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.201 [2024-11-25 23:21:18.447917] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:46.201 [2024-11-25 23:21:18.447976] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:46.463 [2024-11-25 23:21:18.604476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.463 [2024-11-25 23:21:18.604634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:46.463 [2024-11-25 23:21:18.604652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:46.463 [2024-11-25 23:21:18.604660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.463 [2024-11-25 23:21:18.606866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.463 [2024-11-25 23:21:18.606897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:46.463 [2024-11-25 23:21:18.606905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.187 ms 00:19:46.463 [2024-11-25 23:21:18.606911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.463 [2024-11-25 23:21:18.606976] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:46.463 [2024-11-25 23:21:18.607548] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:46.463 [2024-11-25 23:21:18.607567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.463 [2024-11-25 23:21:18.607574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:46.463 [2024-11-25 23:21:18.607581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:19:46.463 [2024-11-25 23:21:18.607588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.463 [2024-11-25 23:21:18.608938] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:46.463 [2024-11-25 23:21:18.619309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.463 [2024-11-25 23:21:18.619337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:46.463 [2024-11-25 23:21:18.619347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.372 ms 00:19:46.463 [2024-11-25 23:21:18.619354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.463 [2024-11-25 23:21:18.619429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.463 [2024-11-25 23:21:18.619439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:46.463 [2024-11-25 23:21:18.619445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:46.463 [2024-11-25 23:21:18.619451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.463 [2024-11-25 23:21:18.625860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:46.463 [2024-11-25 23:21:18.625885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.463 [2024-11-25 23:21:18.625892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.378 ms 00:19:46.463 [2024-11-25 23:21:18.625898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.463 [2024-11-25 23:21:18.625970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.463 [2024-11-25 23:21:18.625978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.463 [2024-11-25 23:21:18.625984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:46.463 [2024-11-25 23:21:18.625991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.463 [2024-11-25 23:21:18.626009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.463 [2024-11-25 23:21:18.626015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:46.463 [2024-11-25 23:21:18.626021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:46.463 [2024-11-25 23:21:18.626028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.463 [2024-11-25 23:21:18.626047] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:46.463 [2024-11-25 23:21:18.629200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.463 [2024-11-25 23:21:18.629333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.463 [2024-11-25 23:21:18.629347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.157 ms 00:19:46.463 [2024-11-25 23:21:18.629353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.463 [2024-11-25 23:21:18.629385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.463 [2024-11-25 23:21:18.629393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:46.463 [2024-11-25 23:21:18.629400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:46.463 [2024-11-25 23:21:18.629405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.463 [2024-11-25 23:21:18.629432] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:46.463 [2024-11-25 23:21:18.629449] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:46.463 [2024-11-25 23:21:18.629479] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:46.463 [2024-11-25 23:21:18.629492] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:46.463 [2024-11-25 23:21:18.629574] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:46.463 [2024-11-25 23:21:18.629583] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:46.463 [2024-11-25 23:21:18.629592] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:46.463 [2024-11-25 23:21:18.629603] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:46.463 [2024-11-25 23:21:18.629610] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:46.463 [2024-11-25 23:21:18.629617] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:46.463 [2024-11-25 23:21:18.629623] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:46.463 [2024-11-25 23:21:18.629630] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:46.463 [2024-11-25 23:21:18.629636] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:46.463 [2024-11-25 23:21:18.629643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.463 [2024-11-25 23:21:18.629649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:46.463 [2024-11-25 23:21:18.629655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:19:46.463 [2024-11-25 23:21:18.629660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.463 [2024-11-25 23:21:18.629728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.463 [2024-11-25 23:21:18.629737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:46.463 [2024-11-25 23:21:18.629743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:46.463 [2024-11-25 23:21:18.629749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.463 [2024-11-25 23:21:18.629828] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:46.463 [2024-11-25 23:21:18.629837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:46.463 [2024-11-25 23:21:18.629844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.463 [2024-11-25 23:21:18.629850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.463 [2024-11-25 23:21:18.629857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:46.463 [2024-11-25 23:21:18.629862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:46.463 [2024-11-25 23:21:18.629868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:46.463 [2024-11-25 23:21:18.629873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:46.463 [2024-11-25 23:21:18.629880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:46.463 [2024-11-25 23:21:18.629886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.463 [2024-11-25 23:21:18.629895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:46.463 [2024-11-25 23:21:18.629906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:46.463 [2024-11-25 23:21:18.629912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.463 [2024-11-25 23:21:18.629918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:46.463 [2024-11-25 23:21:18.629924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:46.463 [2024-11-25 23:21:18.629929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.463 [2024-11-25 23:21:18.629935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:46.463 [2024-11-25 23:21:18.629940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:46.464 [2024-11-25 23:21:18.629945] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.464 [2024-11-25 23:21:18.629951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:46.464 [2024-11-25 23:21:18.629957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:46.464 [2024-11-25 23:21:18.629962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.464 [2024-11-25 23:21:18.629967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:46.464 [2024-11-25 23:21:18.629972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:46.464 [2024-11-25 23:21:18.629977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.464 [2024-11-25 23:21:18.629982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:46.464 [2024-11-25 23:21:18.629988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:46.464 [2024-11-25 23:21:18.629993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.464 [2024-11-25 23:21:18.629998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:46.464 [2024-11-25 23:21:18.630003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:46.464 [2024-11-25 23:21:18.630008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.464 [2024-11-25 23:21:18.630013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:46.464 [2024-11-25 23:21:18.630018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:46.464 [2024-11-25 23:21:18.630023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.464 [2024-11-25 23:21:18.630028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:46.464 [2024-11-25 23:21:18.630033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:46.464 [2024-11-25 23:21:18.630037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.464 [2024-11-25 23:21:18.630043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:46.464 [2024-11-25 23:21:18.630048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:46.464 [2024-11-25 23:21:18.630053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.464 [2024-11-25 23:21:18.630076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:46.464 [2024-11-25 23:21:18.630081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:46.464 [2024-11-25 23:21:18.630087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.464 [2024-11-25 23:21:18.630094] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:46.464 [2024-11-25 23:21:18.630101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:46.464 [2024-11-25 23:21:18.630109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.464 [2024-11-25 23:21:18.630115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.464 [2024-11-25 23:21:18.630122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:46.464 [2024-11-25 23:21:18.630128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:46.464 [2024-11-25 23:21:18.630133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:46.464 
[2024-11-25 23:21:18.630139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:46.464 [2024-11-25 23:21:18.630144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:46.464 [2024-11-25 23:21:18.630149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:46.464 [2024-11-25 23:21:18.630155] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:46.464 [2024-11-25 23:21:18.630163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.464 [2024-11-25 23:21:18.630171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:46.464 [2024-11-25 23:21:18.630177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:46.464 [2024-11-25 23:21:18.630182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:46.464 [2024-11-25 23:21:18.630187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:46.464 [2024-11-25 23:21:18.630193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:46.464 [2024-11-25 23:21:18.630198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:46.464 [2024-11-25 23:21:18.630203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:46.464 [2024-11-25 23:21:18.630210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:46.464 [2024-11-25 23:21:18.630215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:46.464 [2024-11-25 23:21:18.630221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:46.464 [2024-11-25 23:21:18.630226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:46.464 [2024-11-25 23:21:18.630231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:46.464 [2024-11-25 23:21:18.630237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:46.464 [2024-11-25 23:21:18.630242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:46.464 [2024-11-25 23:21:18.630248] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:46.464 [2024-11-25 23:21:18.630254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.464 [2024-11-25 23:21:18.630261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:46.464 [2024-11-25 23:21:18.630267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:46.464 [2024-11-25 23:21:18.630273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:46.464 [2024-11-25 23:21:18.630279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:46.464 [2024-11-25 23:21:18.630285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.464 [2024-11-25 23:21:18.630294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:46.464 [2024-11-25 23:21:18.630300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:19:46.464 [2024-11-25 23:21:18.630306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.464 [2024-11-25 23:21:18.654764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.464 [2024-11-25 23:21:18.654795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:46.464 [2024-11-25 23:21:18.654805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.405 ms 00:19:46.464 [2024-11-25 23:21:18.654812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.464 [2024-11-25 23:21:18.654910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.464 [2024-11-25 23:21:18.654919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:46.464 [2024-11-25 23:21:18.654926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:46.464 [2024-11-25 23:21:18.654932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.464 [2024-11-25 23:21:18.700050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.464 [2024-11-25 23:21:18.700088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:46.464 [2024-11-25 23:21:18.700100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.100 ms 00:19:46.464 [2024-11-25 23:21:18.700107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.464 [2024-11-25 23:21:18.700182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.464 [2024-11-25 23:21:18.700191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:46.464 [2024-11-25 23:21:18.700198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:46.464 [2024-11-25 23:21:18.700204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.464 [2024-11-25 23:21:18.700590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.464 [2024-11-25 23:21:18.700604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:46.464 [2024-11-25 23:21:18.700612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:19:46.464 [2024-11-25 23:21:18.700624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.464 [2024-11-25 23:21:18.700741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.464 [2024-11-25 23:21:18.700749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:46.464 [2024-11-25 23:21:18.700756] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:46.464 [2024-11-25 23:21:18.700762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.464 [2024-11-25 23:21:18.713093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.464 [2024-11-25 23:21:18.713245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:46.464 [2024-11-25 23:21:18.713259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.315 ms 00:19:46.464 [2024-11-25 23:21:18.713265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.464 [2024-11-25 23:21:18.724080] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:46.464 [2024-11-25 23:21:18.724195] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:46.464 [2024-11-25 23:21:18.724209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.464 [2024-11-25 23:21:18.724216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:46.464 [2024-11-25 23:21:18.724223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.859 ms 00:19:46.464 [2024-11-25 23:21:18.724229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.464 [2024-11-25 23:21:18.743197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.464 [2024-11-25 23:21:18.743307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:46.464 [2024-11-25 23:21:18.743321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.912 ms 00:19:46.464 [2024-11-25 23:21:18.743329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.464 [2024-11-25 23:21:18.752714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.464 [2024-11-25 23:21:18.752740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:46.465 [2024-11-25 23:21:18.752748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.330 ms 00:19:46.465 [2024-11-25 23:21:18.752754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.465 [2024-11-25 23:21:18.761860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.465 [2024-11-25 23:21:18.761886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:46.465 [2024-11-25 23:21:18.761894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.063 ms 00:19:46.465 [2024-11-25 23:21:18.761900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.465 [2024-11-25 23:21:18.762400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.465 [2024-11-25 23:21:18.762447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:46.465 [2024-11-25 23:21:18.762455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:19:46.465 [2024-11-25 23:21:18.762461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.465 [2024-11-25 23:21:18.810780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.465 [2024-11-25 23:21:18.810817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:46.465 [2024-11-25 23:21:18.810827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.301 ms 00:19:46.465 [2024-11-25 23:21:18.810834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.465 [2024-11-25 23:21:18.819238] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:46.725 [2024-11-25 23:21:18.833664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.725 [2024-11-25 23:21:18.833809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:46.725 [2024-11-25 23:21:18.833823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.753 ms 00:19:46.725 [2024-11-25 23:21:18.833830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.725 [2024-11-25 23:21:18.833913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.725 [2024-11-25 23:21:18.833924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:46.725 [2024-11-25 23:21:18.833932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:46.725 [2024-11-25 23:21:18.833938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.725 [2024-11-25 23:21:18.833982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.725 [2024-11-25 23:21:18.833989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:46.725 [2024-11-25 23:21:18.833996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:46.725 [2024-11-25 23:21:18.834002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.725 [2024-11-25 23:21:18.834026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.725 [2024-11-25 23:21:18.834034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:46.725 [2024-11-25 23:21:18.834041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:46.725 [2024-11-25 23:21:18.834047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.725 [2024-11-25 23:21:18.834096] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:46.725 [2024-11-25 23:21:18.834106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.725 [2024-11-25 23:21:18.834113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:46.725 [2024-11-25 23:21:18.834119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:46.725 [2024-11-25 23:21:18.834125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.725 [2024-11-25 23:21:18.853129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.725 [2024-11-25 23:21:18.853157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:46.725 [2024-11-25 23:21:18.853166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.984 ms 00:19:46.725 [2024-11-25 23:21:18.853173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.725 [2024-11-25 23:21:18.853250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.725 [2024-11-25 23:21:18.853259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:46.725 [2024-11-25 23:21:18.853266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:46.725 [2024-11-25 23:21:18.853272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
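The startup dump a few entries back fixes the FTL geometry, and the logged numbers are internally consistent: each 4-byte entry ("L2P address size: 4") maps one 4096-byte block of the 23592960-block ftl0 bdev reported by bdev_get_bdevs earlier, which works out to exactly the 90.00 MiB "Region l2p" in the layout. A one-line bash check, pure arithmetic on the values in this log:

    # 23592960 L2P entries x 4 B = 94371840 B = exactly 90 MiB,
    # matching 'Region l2p ... blocks: 90.00 MiB' in the layout dump above.
    echo "$((23592960 * 4)) bytes = $((23592960 * 4 / 1024 / 1024)) MiB"

The same 23592960 entries at 4 KiB per block give 92160 MiB of user-visible space against the 102400 MiB data_btm region, i.e. roughly 10% held back — consistent with FTL overprovisioning plus metadata, though this dump does not spell out the exact split.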
00:19:46.725 [2024-11-25 23:21:18.854131] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:46.725 [2024-11-25 23:21:18.856477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 249.391 ms, result 0 00:19:46.725 [2024-11-25 23:21:18.857917] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:46.725 [2024-11-25 23:21:18.868721] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:47.678  [2024-11-25T23:21:21.033Z] Copying: 21/256 [MB] (21 MBps) [2024-11-25T23:21:21.975Z] Copying: 41/256 [MB] (20 MBps) [2024-11-25T23:21:22.920Z] Copying: 57/256 [MB] (15 MBps) [2024-11-25T23:21:24.312Z] Copying: 68/256 [MB] (11 MBps) [2024-11-25T23:21:24.959Z] Copying: 88/256 [MB] (19 MBps) [2024-11-25T23:21:25.903Z] Copying: 105/256 [MB] (17 MBps) [2024-11-25T23:21:27.292Z] Copying: 115/256 [MB] (10 MBps) [2024-11-25T23:21:28.235Z] Copying: 126/256 [MB] (10 MBps) [2024-11-25T23:21:29.178Z] Copying: 141/256 [MB] (15 MBps) [2024-11-25T23:21:30.121Z] Copying: 152/256 [MB] (11 MBps) [2024-11-25T23:21:31.067Z] Copying: 164/256 [MB] (11 MBps) [2024-11-25T23:21:32.012Z] Copying: 174/256 [MB] (10 MBps) [2024-11-25T23:21:32.957Z] Copying: 184/256 [MB] (10 MBps) [2024-11-25T23:21:33.902Z] Copying: 195/256 [MB] (10 MBps) [2024-11-25T23:21:35.292Z] Copying: 205/256 [MB] (10 MBps) [2024-11-25T23:21:36.237Z] Copying: 216/256 [MB] (11 MBps) [2024-11-25T23:21:37.182Z] Copying: 227/256 [MB] (10 MBps) [2024-11-25T23:21:38.125Z] Copying: 242936/262144 [kB] (9944 kBps) [2024-11-25T23:21:39.069Z] Copying: 252956/262144 [kB] (10020 kBps) [2024-11-25T23:21:39.069Z] Copying: 256/256 [MB] (average 12 MBps)[2024-11-25 23:21:38.776000] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:06.700 [2024-11-25 23:21:38.786899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.700 [2024-11-25 23:21:38.787138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:06.700 [2024-11-25 23:21:38.787167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:06.700 [2024-11-25 23:21:38.787178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.700 [2024-11-25 23:21:38.787217] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:06.700 [2024-11-25 23:21:38.790549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.700 [2024-11-25 23:21:38.790720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:06.700 [2024-11-25 23:21:38.790743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.315 ms 00:20:06.700 [2024-11-25 23:21:38.790752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.700 [2024-11-25 23:21:38.793908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.700 [2024-11-25 23:21:38.794088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:06.700 [2024-11-25 23:21:38.794108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.122 ms 00:20:06.700 [2024-11-25 23:21:38.794118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.700 [2024-11-25 23:21:38.803010] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:20:06.700 [2024-11-25 23:21:38.803069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:06.700 [2024-11-25 23:21:38.803089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.869 ms 00:20:06.700 [2024-11-25 23:21:38.803097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.700 [2024-11-25 23:21:38.810029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.700 [2024-11-25 23:21:38.810208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:06.700 [2024-11-25 23:21:38.810228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.888 ms 00:20:06.700 [2024-11-25 23:21:38.810237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.700 [2024-11-25 23:21:38.836088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.700 [2024-11-25 23:21:38.836136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:06.700 [2024-11-25 23:21:38.836149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.782 ms 00:20:06.700 [2024-11-25 23:21:38.836157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.700 [2024-11-25 23:21:38.854148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.700 [2024-11-25 23:21:38.854205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:06.700 [2024-11-25 23:21:38.854218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.940 ms 00:20:06.700 [2024-11-25 23:21:38.854230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.700 [2024-11-25 23:21:38.854383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.700 [2024-11-25 23:21:38.854395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:06.700 [2024-11-25 23:21:38.854407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:06.700 [2024-11-25 23:21:38.854428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.700 [2024-11-25 23:21:38.880167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.700 [2024-11-25 23:21:38.880226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:06.700 [2024-11-25 23:21:38.880238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.721 ms 00:20:06.700 [2024-11-25 23:21:38.880247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.700 [2024-11-25 23:21:38.905709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.700 [2024-11-25 23:21:38.905898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:06.700 [2024-11-25 23:21:38.905917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.412 ms 00:20:06.700 [2024-11-25 23:21:38.905925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.700 [2024-11-25 23:21:38.931361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.700 [2024-11-25 23:21:38.931416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:06.700 [2024-11-25 23:21:38.931429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.049 ms 00:20:06.700 [2024-11-25 23:21:38.931437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.700 
[2024-11-25 23:21:38.956368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.700 [2024-11-25 23:21:38.956561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:06.700 [2024-11-25 23:21:38.956582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.833 ms 00:20:06.700 [2024-11-25 23:21:38.956590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.700 [2024-11-25 23:21:38.956772] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:06.700 [2024-11-25 23:21:38.956806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... Band 2 through Band 100 elided; every band reports the identical line: 0 / 261120 wr_cnt: 0 state: free ...] 00:20:06.701 [2024-11-25 23:21:38.957685] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:06.701 [2024-11-25 23:21:38.957694] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a832dbc-fe5a-4899-8dc2-20f67e9df730 00:20:06.701 [2024-11-25 23:21:38.957704] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:06.701 [2024-11-25 23:21:38.957712] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:06.701 [2024-11-25 23:21:38.957720] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:06.701 [2024-11-25 23:21:38.957730] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:06.701 [2024-11-25 23:21:38.957738] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:06.701 [2024-11-25 23:21:38.957746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0 00:20:06.701 [2024-11-25 23:21:38.957754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0 00:20:06.701 [2024-11-25 23:21:38.957761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0 00:20:06.701 [2024-11-25 23:21:38.957767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0 00:20:06.701 [2024-11-25 23:21:38.957776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.701 [2024-11-25 23:21:38.957785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:06.701 [2024-11-25 23:21:38.957797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:20:06.701 [2024-11-25 23:21:38.957805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.701 [2024-11-25 23:21:38.972682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.701 [2024-11-25 23:21:38.972726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:06.701 [2024-11-25 23:21:38.972738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.856 ms 00:20:06.701 [2024-11-25 23:21:38.972746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.701 [2024-11-25 23:21:38.973244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.701 [2024-11-25 23:21:38.973261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:06.701 [2024-11-25 23:21:38.973272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:20:06.701 [2024-11-25 23:21:38.973280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
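A quick reading of the statistics dump above: WAF is the write amplification factor, conventionally the ratio of media writes to user writes (this interpretation is inferred from the counter names, not stated by the log). The dump records total writes: 960 against user writes: 0, since no user I/O reached ftl0 through this instance before shutdown; 960 / 0 has no finite value, so the field prints as WAF: inf rather than a ratio.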
00:20:06.701 [2024-11-25 23:21:39.015071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.701 [2024-11-25 23:21:39.015237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.701 [2024-11-25 23:21:39.015255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.701 [2024-11-25 23:21:39.015263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.701 [2024-11-25 23:21:39.015353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.701 [2024-11-25 23:21:39.015362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.701 [2024-11-25 23:21:39.015369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.701 [2024-11-25 23:21:39.015376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.701 [2024-11-25 23:21:39.015425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.701 [2024-11-25 23:21:39.015435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.701 [2024-11-25 23:21:39.015444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.701 [2024-11-25 23:21:39.015450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.701 [2024-11-25 23:21:39.015466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.701 [2024-11-25 23:21:39.015477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.701 [2024-11-25 23:21:39.015485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.701 [2024-11-25 23:21:39.015491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.963 [2024-11-25 23:21:39.087826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.963 [2024-11-25 23:21:39.087880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.963 [2024-11-25 23:21:39.087891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.963 [2024-11-25 23:21:39.087898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.963 [2024-11-25 23:21:39.141163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.963 [2024-11-25 23:21:39.141203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.963 [2024-11-25 23:21:39.141212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.963 [2024-11-25 23:21:39.141219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.963 [2024-11-25 23:21:39.141265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.963 [2024-11-25 23:21:39.141273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.963 [2024-11-25 23:21:39.141279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.963 [2024-11-25 23:21:39.141285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.963 [2024-11-25 23:21:39.141309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.963 [2024-11-25 23:21:39.141316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.963 [2024-11-25 23:21:39.141325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.963 [2024-11-25 23:21:39.141331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.963 [2024-11-25 23:21:39.141409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.963 [2024-11-25 23:21:39.141418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Initialize memory pools 00:20:06.963 [2024-11-25 23:21:39.141425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.963 [2024-11-25 23:21:39.141431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.963 [2024-11-25 23:21:39.141455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.963 [2024-11-25 23:21:39.141463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:06.963 [2024-11-25 23:21:39.141469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.963 [2024-11-25 23:21:39.141478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.963 [2024-11-25 23:21:39.141516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.963 [2024-11-25 23:21:39.141523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.963 [2024-11-25 23:21:39.141530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.963 [2024-11-25 23:21:39.141536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.963 [2024-11-25 23:21:39.141578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.963 [2024-11-25 23:21:39.141585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.963 [2024-11-25 23:21:39.141592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.963 [2024-11-25 23:21:39.141600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.963 [2024-11-25 23:21:39.141729] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 354.829 ms, result 0 00:20:07.533 00:20:07.533 00:20:07.533 23:21:39 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76759 00:20:07.533 23:21:39 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76759 00:20:07.533 23:21:39 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76759 ']' 00:20:07.533 23:21:39 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.533 23:21:39 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:07.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.533 23:21:39 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.533 23:21:39 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.533 23:21:39 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.533 23:21:39 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:07.795 [2024-11-25 23:21:39.964031] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
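The svcpid/waitforlisten xtrace above is the standard SPDK test pattern for bringing up a target before driving it over RPC. A minimal sketch of what ftl/trim.sh is doing here (helper functions are assumed to come from test/common/autotest_common.sh; the binary path matches this run, and the RPC socket is the default /var/tmp/spdk.sock):

  # launch the target with FTL init-time logging and remember its pid
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  # block until the target accepts RPCs on /var/tmp/spdk.sock
  waitforlisten "$svcpid"
  # ... drive the trim test via scripts/rpc.py ...
  # tear the target down once the test is done
  killprocess "$svcpid"

The same pid (76759) reappears at the end of the test below, when killprocess stops the target.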
00:20:07.795 [2024-11-25 23:21:39.964159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76759 ] 00:20:07.795 [2024-11-25 23:21:40.117626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.056 [2024-11-25 23:21:40.210040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.628 23:21:40 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.628 23:21:40 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:08.628 23:21:40 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:08.891 [2024-11-25 23:21:40.997973] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:08.891 [2024-11-25 23:21:40.998030] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:08.891 [2024-11-25 23:21:41.171218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.891 [2024-11-25 23:21:41.171396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:08.891 [2024-11-25 23:21:41.171415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:08.891 [2024-11-25 23:21:41.171422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.891 [2024-11-25 23:21:41.173654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.891 [2024-11-25 23:21:41.173685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:08.891 [2024-11-25 23:21:41.173695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.213 ms 00:20:08.891 [2024-11-25 23:21:41.173701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.891 [2024-11-25 23:21:41.173766] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:08.891 [2024-11-25 23:21:41.174422] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:08.891 [2024-11-25 23:21:41.174508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.891 [2024-11-25 23:21:41.174550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:08.891 [2024-11-25 23:21:41.174570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:20:08.891 [2024-11-25 23:21:41.174585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.891 [2024-11-25 23:21:41.175896] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:08.891 [2024-11-25 23:21:41.186481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.891 [2024-11-25 23:21:41.186588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:08.891 [2024-11-25 23:21:41.186647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.590 ms 00:20:08.891 [2024-11-25 23:21:41.186674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.891 [2024-11-25 23:21:41.186762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.891 [2024-11-25 23:21:41.186790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:08.891 [2024-11-25 23:21:41.186812] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:08.891 [2024-11-25 23:21:41.186821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.891 [2024-11-25 23:21:41.192990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.891 [2024-11-25 23:21:41.193019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:08.891 [2024-11-25 23:21:41.193027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.122 ms 00:20:08.891 [2024-11-25 23:21:41.193034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.891 [2024-11-25 23:21:41.193126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.891 [2024-11-25 23:21:41.193136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:08.891 [2024-11-25 23:21:41.193142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:08.891 [2024-11-25 23:21:41.193153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.891 [2024-11-25 23:21:41.193177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.891 [2024-11-25 23:21:41.193185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:08.891 [2024-11-25 23:21:41.193192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:08.891 [2024-11-25 23:21:41.193221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.891 [2024-11-25 23:21:41.193239] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:08.891 [2024-11-25 23:21:41.196303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.891 [2024-11-25 23:21:41.196404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:08.891 [2024-11-25 23:21:41.196419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.066 ms 00:20:08.891 [2024-11-25 23:21:41.196425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.891 [2024-11-25 23:21:41.196459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.891 [2024-11-25 23:21:41.196466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:08.891 [2024-11-25 23:21:41.196474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:08.891 [2024-11-25 23:21:41.196481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.891 [2024-11-25 23:21:41.196499] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:08.891 [2024-11-25 23:21:41.196515] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:08.891 [2024-11-25 23:21:41.196549] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:08.891 [2024-11-25 23:21:41.196562] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:08.891 [2024-11-25 23:21:41.196647] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:08.891 [2024-11-25 23:21:41.196656] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:08.891 [2024-11-25 23:21:41.196670] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:08.891 [2024-11-25 23:21:41.196678] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:08.891 [2024-11-25 23:21:41.196687] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:08.891 [2024-11-25 23:21:41.196695] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:08.891 [2024-11-25 23:21:41.196703] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:08.891 [2024-11-25 23:21:41.196709] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:08.891 [2024-11-25 23:21:41.196718] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:08.891 [2024-11-25 23:21:41.196724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.891 [2024-11-25 23:21:41.196731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:08.891 [2024-11-25 23:21:41.196737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:20:08.891 [2024-11-25 23:21:41.196744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.891 [2024-11-25 23:21:41.196823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.891 [2024-11-25 23:21:41.196832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:08.891 [2024-11-25 23:21:41.196847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:08.891 [2024-11-25 23:21:41.196854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.891 [2024-11-25 23:21:41.196933] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:08.891 [2024-11-25 23:21:41.196944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:08.891 [2024-11-25 23:21:41.196950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:08.891 [2024-11-25 23:21:41.196958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.891 [2024-11-25 23:21:41.196964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:08.891 [2024-11-25 23:21:41.196971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:08.891 [2024-11-25 23:21:41.196977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:08.891 [2024-11-25 23:21:41.196987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:08.891 [2024-11-25 23:21:41.196993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:08.891 [2024-11-25 23:21:41.197000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:08.891 [2024-11-25 23:21:41.197005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:08.891 [2024-11-25 23:21:41.197012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:08.891 [2024-11-25 23:21:41.197018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:08.891 [2024-11-25 23:21:41.197026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:08.891 [2024-11-25 23:21:41.197031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:08.891 [2024-11-25 23:21:41.197038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.891 
[2024-11-25 23:21:41.197043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:08.891 [2024-11-25 23:21:41.197050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:08.891 [2024-11-25 23:21:41.197076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.891 [2024-11-25 23:21:41.197083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:08.892 [2024-11-25 23:21:41.197088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:08.892 [2024-11-25 23:21:41.197095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:08.892 [2024-11-25 23:21:41.197100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:08.892 [2024-11-25 23:21:41.197109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:08.892 [2024-11-25 23:21:41.197114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:08.892 [2024-11-25 23:21:41.197120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:08.892 [2024-11-25 23:21:41.197126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:08.892 [2024-11-25 23:21:41.197133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:08.892 [2024-11-25 23:21:41.197138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:08.892 [2024-11-25 23:21:41.197144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:08.892 [2024-11-25 23:21:41.197149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:08.892 [2024-11-25 23:21:41.197156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:08.892 [2024-11-25 23:21:41.197161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:08.892 [2024-11-25 23:21:41.197170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:08.892 [2024-11-25 23:21:41.197175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:08.892 [2024-11-25 23:21:41.197182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:08.892 [2024-11-25 23:21:41.197187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:08.892 [2024-11-25 23:21:41.197194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:08.892 [2024-11-25 23:21:41.197199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:08.892 [2024-11-25 23:21:41.197207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.892 [2024-11-25 23:21:41.197214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:08.892 [2024-11-25 23:21:41.197221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:08.892 [2024-11-25 23:21:41.197226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.892 [2024-11-25 23:21:41.197234] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:08.892 [2024-11-25 23:21:41.197242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:08.892 [2024-11-25 23:21:41.197250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:08.892 [2024-11-25 23:21:41.197256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.892 [2024-11-25 23:21:41.197264] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:08.892 [2024-11-25 23:21:41.197269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:08.892 [2024-11-25 23:21:41.197275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:08.892 [2024-11-25 23:21:41.197281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:08.892 [2024-11-25 23:21:41.197288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:08.892 [2024-11-25 23:21:41.197293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:08.892 [2024-11-25 23:21:41.197301] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:08.892 [2024-11-25 23:21:41.197309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:08.892 [2024-11-25 23:21:41.197318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:08.892 [2024-11-25 23:21:41.197324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:08.892 [2024-11-25 23:21:41.197332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:08.892 [2024-11-25 23:21:41.197338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:08.892 [2024-11-25 23:21:41.197345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:08.892 [2024-11-25 23:21:41.197350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:08.892 [2024-11-25 23:21:41.197356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:08.892 [2024-11-25 23:21:41.197362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:08.892 [2024-11-25 23:21:41.197368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:08.892 [2024-11-25 23:21:41.197374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:08.892 [2024-11-25 23:21:41.197381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:08.892 [2024-11-25 23:21:41.197386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:08.892 [2024-11-25 23:21:41.197393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:08.892 [2024-11-25 23:21:41.197399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:08.892 [2024-11-25 23:21:41.197406] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:08.892 [2024-11-25 
23:21:41.197412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:08.892 [2024-11-25 23:21:41.197421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:08.892 [2024-11-25 23:21:41.197426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:08.892 [2024-11-25 23:21:41.197433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:08.892 [2024-11-25 23:21:41.197439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:08.892 [2024-11-25 23:21:41.197446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.892 [2024-11-25 23:21:41.197452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:08.892 [2024-11-25 23:21:41.197461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:20:08.892 [2024-11-25 23:21:41.197469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.892 [2024-11-25 23:21:41.221851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.892 [2024-11-25 23:21:41.221880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:08.892 [2024-11-25 23:21:41.221890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.325 ms 00:20:08.892 [2024-11-25 23:21:41.221899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.892 [2024-11-25 23:21:41.221994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.892 [2024-11-25 23:21:41.222001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:08.892 [2024-11-25 23:21:41.222010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:08.892 [2024-11-25 23:21:41.222016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.892 [2024-11-25 23:21:41.248497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.892 [2024-11-25 23:21:41.248526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:08.892 [2024-11-25 23:21:41.248536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.461 ms 00:20:08.892 [2024-11-25 23:21:41.248542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.892 [2024-11-25 23:21:41.248589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.892 [2024-11-25 23:21:41.248596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:08.892 [2024-11-25 23:21:41.248604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:08.892 [2024-11-25 23:21:41.248610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.892 [2024-11-25 23:21:41.249009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.892 [2024-11-25 23:21:41.249027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:08.892 [2024-11-25 23:21:41.249038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:20:08.892 [2024-11-25 23:21:41.249044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:08.892 [2024-11-25 23:21:41.249169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.892 [2024-11-25 23:21:41.249177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:08.892 [2024-11-25 23:21:41.249186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:08.892 [2024-11-25 23:21:41.249193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.153 [2024-11-25 23:21:41.262814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.153 [2024-11-25 23:21:41.262839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:09.153 [2024-11-25 23:21:41.262848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.602 ms 00:20:09.153 [2024-11-25 23:21:41.262854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.153 [2024-11-25 23:21:41.273574] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:09.153 [2024-11-25 23:21:41.273614] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:09.153 [2024-11-25 23:21:41.273626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.154 [2024-11-25 23:21:41.273633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:09.154 [2024-11-25 23:21:41.273642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.678 ms 00:20:09.154 [2024-11-25 23:21:41.273653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.154 [2024-11-25 23:21:41.292544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.154 [2024-11-25 23:21:41.292572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:09.154 [2024-11-25 23:21:41.292583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.832 ms 00:20:09.154 [2024-11-25 23:21:41.292589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.154 [2024-11-25 23:21:41.301849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.154 [2024-11-25 23:21:41.301875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:09.154 [2024-11-25 23:21:41.301887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.200 ms 00:20:09.154 [2024-11-25 23:21:41.301892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.154 [2024-11-25 23:21:41.310617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.154 [2024-11-25 23:21:41.310640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:09.154 [2024-11-25 23:21:41.310650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.681 ms 00:20:09.154 [2024-11-25 23:21:41.310655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.154 [2024-11-25 23:21:41.311141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.154 [2024-11-25 23:21:41.311154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:09.154 [2024-11-25 23:21:41.311164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:20:09.154 [2024-11-25 23:21:41.311171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.154 [2024-11-25 
23:21:41.369143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.154 [2024-11-25 23:21:41.369181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:09.154 [2024-11-25 23:21:41.369195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.950 ms 00:20:09.154 [2024-11-25 23:21:41.369202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.154 [2024-11-25 23:21:41.378236] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:09.154 [2024-11-25 23:21:41.392778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.154 [2024-11-25 23:21:41.392947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:09.154 [2024-11-25 23:21:41.392964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.495 ms 00:20:09.154 [2024-11-25 23:21:41.392972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.154 [2024-11-25 23:21:41.393041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.154 [2024-11-25 23:21:41.393051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:09.154 [2024-11-25 23:21:41.393077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:09.154 [2024-11-25 23:21:41.393085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.154 [2024-11-25 23:21:41.393133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.154 [2024-11-25 23:21:41.393143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:09.154 [2024-11-25 23:21:41.393149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:09.154 [2024-11-25 23:21:41.393158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.154 [2024-11-25 23:21:41.393178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.154 [2024-11-25 23:21:41.393186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:09.154 [2024-11-25 23:21:41.393193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:09.154 [2024-11-25 23:21:41.393202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.154 [2024-11-25 23:21:41.393230] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:09.154 [2024-11-25 23:21:41.393241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.154 [2024-11-25 23:21:41.393251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:09.154 [2024-11-25 23:21:41.393259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:09.154 [2024-11-25 23:21:41.393264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.154 [2024-11-25 23:21:41.412465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.154 [2024-11-25 23:21:41.412494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:09.154 [2024-11-25 23:21:41.412505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.179 ms 00:20:09.154 [2024-11-25 23:21:41.412512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.154 [2024-11-25 23:21:41.412587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.154 [2024-11-25 23:21:41.412596] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:09.154 [2024-11-25 23:21:41.412607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:09.154 [2024-11-25 23:21:41.412613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.154 [2024-11-25 23:21:41.413401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:09.154 [2024-11-25 23:21:41.415648] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 241.924 ms, result 0 00:20:09.154 [2024-11-25 23:21:41.417533] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:09.154 Some configs were skipped because the RPC state that can call them passed over. 00:20:09.154 23:21:41 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:09.414 [2024-11-25 23:21:41.642780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.415 [2024-11-25 23:21:41.642890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:09.415 [2024-11-25 23:21:41.642934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.467 ms 00:20:09.415 [2024-11-25 23:21:41.642954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.415 [2024-11-25 23:21:41.642992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.678 ms, result 0 00:20:09.415 true 00:20:09.415 23:21:41 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:09.675 [2024-11-25 23:21:41.842842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.675 [2024-11-25 23:21:41.842935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:09.675 [2024-11-25 23:21:41.842976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.333 ms 00:20:09.675 [2024-11-25 23:21:41.842994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.675 [2024-11-25 23:21:41.843034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.523 ms, result 0 00:20:09.675 true 00:20:09.675 23:21:41 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76759 00:20:09.675 23:21:41 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76759 ']' 00:20:09.675 23:21:41 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76759 00:20:09.675 23:21:41 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:09.675 23:21:41 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.675 23:21:41 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76759 00:20:09.675 23:21:41 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.675 killing process with pid 76759 00:20:09.675 23:21:41 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.675 23:21:41 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76759' 00:20:09.675 23:21:41 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76759 00:20:09.675 23:21:41 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76759
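Worth noting about the two bdev_ftl_unmap calls above: they trim 1024 blocks at each extreme of the device. Startup reported 23592960 L2P entries, and 23592960 - 1024 = 23591936, so the second call unmaps exactly the last 1024 blocks. Replayed by hand against a running target (default RPC socket assumed, working directory the spdk checkout), the calls are:

  # trim the first 1024 blocks of ftl0
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  # trim the final 1024 blocks (the device exposes 23592960 blocks)
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each call runs as its own short 'FTL trim' management process, completing in a couple of milliseconds in the trace above.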
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.250 [2024-11-25 23:21:42.441453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:10.250 [2024-11-25 23:21:42.441465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:10.250 [2024-11-25 23:21:42.441473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.250 [2024-11-25 23:21:42.441494] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:10.250 [2024-11-25 23:21:42.443650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.250 [2024-11-25 23:21:42.443677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:10.250 [2024-11-25 23:21:42.443688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.140 ms 00:20:10.250 [2024-11-25 23:21:42.443695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.250 [2024-11-25 23:21:42.443927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.250 [2024-11-25 23:21:42.443935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:10.250 [2024-11-25 23:21:42.443944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:20:10.250 [2024-11-25 23:21:42.443950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.250 [2024-11-25 23:21:42.447603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.250 [2024-11-25 23:21:42.447772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:10.250 [2024-11-25 23:21:42.447791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.635 ms 00:20:10.250 [2024-11-25 23:21:42.447798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.250 [2024-11-25 23:21:42.453283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.250 [2024-11-25 23:21:42.453308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:10.250 [2024-11-25 23:21:42.453318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.448 ms 00:20:10.250 [2024-11-25 23:21:42.453324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.250 [2024-11-25 23:21:42.461557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.250 [2024-11-25 23:21:42.461588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:10.250 [2024-11-25 23:21:42.461600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.184 ms 00:20:10.250 [2024-11-25 23:21:42.461606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.250 [2024-11-25 23:21:42.469024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.250 [2024-11-25 23:21:42.469051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:10.250 [2024-11-25 23:21:42.469070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.386 ms 00:20:10.250 [2024-11-25 23:21:42.469077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.250 [2024-11-25 23:21:42.469207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.250 [2024-11-25 23:21:42.469216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:10.250 [2024-11-25 23:21:42.469225] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:10.250 [2024-11-25 23:21:42.469231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.250 [2024-11-25 23:21:42.478008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.250 [2024-11-25 23:21:42.478032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:10.250 [2024-11-25 23:21:42.478041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.760 ms 00:20:10.250 [2024-11-25 23:21:42.478046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.250 [2024-11-25 23:21:42.486243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.250 [2024-11-25 23:21:42.486266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:10.250 [2024-11-25 23:21:42.486276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.157 ms 00:20:10.250 [2024-11-25 23:21:42.486282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.250 [2024-11-25 23:21:42.493921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.250 [2024-11-25 23:21:42.493944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:10.250 [2024-11-25 23:21:42.493953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.607 ms 00:20:10.250 [2024-11-25 23:21:42.493958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.250 [2024-11-25 23:21:42.501539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.250 [2024-11-25 23:21:42.501562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:10.250 [2024-11-25 23:21:42.501570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.527 ms 00:20:10.250 [2024-11-25 23:21:42.501576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.250 [2024-11-25 23:21:42.501621] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:10.250 [2024-11-25 23:21:42.501634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:10.250 [2024-11-25 23:21:42.501644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:10.250 [2024-11-25 23:21:42.501650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:10.250 [2024-11-25 23:21:42.501658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:10.250 [2024-11-25 23:21:42.501664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:10.250 [2024-11-25 23:21:42.501673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:10.250 [2024-11-25 23:21:42.501679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:10.250 [2024-11-25 23:21:42.501686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:10.250 [2024-11-25 23:21:42.501692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:10.250 [2024-11-25 23:21:42.501699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:10.250 [2024-11-25 23:21:42.501705] 
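Each management step in these traces is a fixed quadruple of trace_step entries: an Action (or Rollback) marker, a name, a duration, and a status. When the log is captured the way the application emits it, one entry per line, the timings of a persist/deinit chain like this one can be tabulated with a short awk pass; a hedged sketch, with the capture file name ftl.log being an assumption:

    # Pair each step's name with its duration (one log entry per line assumed).
    awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); print name " -> " $0 }' ftl.log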
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 
[2024-11-25 23:21:42.501873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.501998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:10.251 [2024-11-25 23:21:42.502037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:10.251 [2024-11-25 23:21:42.502319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:10.252 [2024-11-25 23:21:42.502326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:10.252 [2024-11-25 23:21:42.502343] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:10.252 [2024-11-25 23:21:42.502355] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a832dbc-fe5a-4899-8dc2-20f67e9df730 00:20:10.252 [2024-11-25 23:21:42.502363] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:10.252 [2024-11-25 23:21:42.502370] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:10.252 [2024-11-25 23:21:42.502376] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:10.252 [2024-11-25 23:21:42.502383] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:10.252 [2024-11-25 23:21:42.502388] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:10.252 [2024-11-25 23:21:42.502396] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:10.252 [2024-11-25 23:21:42.502403] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:10.252 [2024-11-25 23:21:42.502409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:10.252 [2024-11-25 23:21:42.502414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:10.252 [2024-11-25 23:21:42.502421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
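Two reference points for reading the dump that ends above. In the bands table, each entry appears to read as valid blocks / band capacity, so 0 / 261120 is an empty band; assuming the 4 KiB logical block size used for the other checks in these notes, one band is 261120 x 4 KiB = 1020 MiB, and the 100 bands listed come close to covering the 102400 MiB data region, the remainder presumably being layout overhead. In the statistics block, WAF is the usual write-amplification ratio, printed as inf here because no user writes have landed yet:

    \mathrm{WAF} \;=\; \frac{\text{total writes}}{\text{user writes}} \;=\; \frac{960}{0} \;\to\; \infty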
00:20:10.252 [2024-11-25 23:21:42.502426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:10.252 [2024-11-25 23:21:42.502435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:20:10.252 [2024-11-25 23:21:42.502440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.252 [2024-11-25 23:21:42.512670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.252 [2024-11-25 23:21:42.512783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:10.252 [2024-11-25 23:21:42.512799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.210 ms 00:20:10.252 [2024-11-25 23:21:42.512805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.252 [2024-11-25 23:21:42.513146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.252 [2024-11-25 23:21:42.513158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:10.252 [2024-11-25 23:21:42.513168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:20:10.252 [2024-11-25 23:21:42.513175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.252 [2024-11-25 23:21:42.550037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.252 [2024-11-25 23:21:42.550071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:10.252 [2024-11-25 23:21:42.550081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.252 [2024-11-25 23:21:42.550088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.252 [2024-11-25 23:21:42.550173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.252 [2024-11-25 23:21:42.550180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:10.252 [2024-11-25 23:21:42.550191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.252 [2024-11-25 23:21:42.550198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.252 [2024-11-25 23:21:42.550237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.252 [2024-11-25 23:21:42.550245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:10.252 [2024-11-25 23:21:42.550255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.252 [2024-11-25 23:21:42.550262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.252 [2024-11-25 23:21:42.550278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.252 [2024-11-25 23:21:42.550285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:10.252 [2024-11-25 23:21:42.550292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.252 [2024-11-25 23:21:42.550300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.513 [2024-11-25 23:21:42.614116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.513 [2024-11-25 23:21:42.614269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:10.513 [2024-11-25 23:21:42.614286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.513 [2024-11-25 23:21:42.614293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.514 [2024-11-25 
23:21:42.666147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.514 [2024-11-25 23:21:42.666179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:10.514 [2024-11-25 23:21:42.666189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.514 [2024-11-25 23:21:42.666198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.514 [2024-11-25 23:21:42.666271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.514 [2024-11-25 23:21:42.666279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:10.514 [2024-11-25 23:21:42.666289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.514 [2024-11-25 23:21:42.666295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.514 [2024-11-25 23:21:42.666323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.514 [2024-11-25 23:21:42.666330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:10.514 [2024-11-25 23:21:42.666338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.514 [2024-11-25 23:21:42.666344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.514 [2024-11-25 23:21:42.666424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.514 [2024-11-25 23:21:42.666432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:10.514 [2024-11-25 23:21:42.666440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.514 [2024-11-25 23:21:42.666446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.514 [2024-11-25 23:21:42.666474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.514 [2024-11-25 23:21:42.666482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:10.514 [2024-11-25 23:21:42.666490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.514 [2024-11-25 23:21:42.666496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.514 [2024-11-25 23:21:42.666533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.514 [2024-11-25 23:21:42.666541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:10.514 [2024-11-25 23:21:42.666551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.514 [2024-11-25 23:21:42.666557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.514 [2024-11-25 23:21:42.666599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.514 [2024-11-25 23:21:42.666607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:10.514 [2024-11-25 23:21:42.666615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.514 [2024-11-25 23:21:42.666621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.514 [2024-11-25 23:21:42.666748] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 225.328 ms, result 0 00:20:11.085 23:21:43 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:11.085 23:21:43 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:11.085 [2024-11-25 23:21:43.278771] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:20:11.085 [2024-11-25 23:21:43.279035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76806 ] 00:20:11.085 [2024-11-25 23:21:43.435193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.346 [2024-11-25 23:21:43.523065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.609 [2024-11-25 23:21:43.750125] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:11.609 [2024-11-25 23:21:43.750337] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:11.609 [2024-11-25 23:21:43.907901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.609 [2024-11-25 23:21:43.907937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:11.609 [2024-11-25 23:21:43.907950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:11.609 [2024-11-25 23:21:43.907957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.609 [2024-11-25 23:21:43.914483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.609 [2024-11-25 23:21:43.914583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:11.609 [2024-11-25 23:21:43.914618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.506 ms 00:20:11.609 [2024-11-25 23:21:43.914641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.609 [2024-11-25 23:21:43.915115] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:11.609 [2024-11-25 23:21:43.917447] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:11.609 [2024-11-25 23:21:43.917773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.609 [2024-11-25 23:21:43.917805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:11.609 [2024-11-25 23:21:43.917832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.681 ms 00:20:11.609 [2024-11-25 23:21:43.917853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.609 [2024-11-25 23:21:43.920187] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:11.609 [2024-11-25 23:21:43.933835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.609 [2024-11-25 23:21:43.933869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:11.609 [2024-11-25 23:21:43.933881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.659 ms 00:20:11.609 [2024-11-25 23:21:43.933889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.609 [2024-11-25 23:21:43.933962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.609 [2024-11-25 23:21:43.933973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:11.609 [2024-11-25 23:21:43.933983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
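For scale: the --count=65536 above, together with the 256 [MB] total that the Copying progress further down counts up to, implies 4096 B per transferred block, so the job copies 65536 x 4 KiB = 256 MiB of the ftl0 bdev into the flat file named by --of. A one-line check of that arithmetic:

    # 65536 blocks x 4096 B, expressed in MiB (block size inferred, not in the log)
    echo $((65536 * 4096 / 1024 / 1024))   # -> 256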
[FTL][ftl0] duration: 0.020 ms 00:20:11.609 [2024-11-25 23:21:43.933990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.609 [2024-11-25 23:21:43.940786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.609 [2024-11-25 23:21:43.940934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.609 [2024-11-25 23:21:43.940949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.755 ms 00:20:11.609 [2024-11-25 23:21:43.940957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.609 [2024-11-25 23:21:43.941084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.609 [2024-11-25 23:21:43.941101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.609 [2024-11-25 23:21:43.941110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:11.609 [2024-11-25 23:21:43.941118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.609 [2024-11-25 23:21:43.941146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.609 [2024-11-25 23:21:43.941154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:11.609 [2024-11-25 23:21:43.941162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:11.609 [2024-11-25 23:21:43.941170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.609 [2024-11-25 23:21:43.941190] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:11.609 [2024-11-25 23:21:43.944726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.609 [2024-11-25 23:21:43.944754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.609 [2024-11-25 23:21:43.944763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.541 ms 00:20:11.609 [2024-11-25 23:21:43.944771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.609 [2024-11-25 23:21:43.944805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.609 [2024-11-25 23:21:43.944814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:11.609 [2024-11-25 23:21:43.944822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:11.609 [2024-11-25 23:21:43.944829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.609 [2024-11-25 23:21:43.944868] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:11.609 [2024-11-25 23:21:43.944889] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:11.610 [2024-11-25 23:21:43.944925] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:11.610 [2024-11-25 23:21:43.944941] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:11.610 [2024-11-25 23:21:43.945046] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:11.610 [2024-11-25 23:21:43.945070] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:11.610 [2024-11-25 23:21:43.945082] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:11.610 [2024-11-25 23:21:43.945095] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:11.610 [2024-11-25 23:21:43.945104] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:11.610 [2024-11-25 23:21:43.945113] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:11.610 [2024-11-25 23:21:43.945120] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:11.610 [2024-11-25 23:21:43.945128] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:11.610 [2024-11-25 23:21:43.945135] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:11.610 [2024-11-25 23:21:43.945143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.610 [2024-11-25 23:21:43.945151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:11.610 [2024-11-25 23:21:43.945159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:20:11.610 [2024-11-25 23:21:43.945166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.610 [2024-11-25 23:21:43.945268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.610 [2024-11-25 23:21:43.945281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:11.610 [2024-11-25 23:21:43.945289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:11.610 [2024-11-25 23:21:43.945296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.610 [2024-11-25 23:21:43.945397] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:11.610 [2024-11-25 23:21:43.945408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:11.610 [2024-11-25 23:21:43.945416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.610 [2024-11-25 23:21:43.945425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.610 [2024-11-25 23:21:43.945433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:11.610 [2024-11-25 23:21:43.945440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:11.610 [2024-11-25 23:21:43.945448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:11.610 [2024-11-25 23:21:43.945456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:11.610 [2024-11-25 23:21:43.945462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:11.610 [2024-11-25 23:21:43.945470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.610 [2024-11-25 23:21:43.945476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:11.610 [2024-11-25 23:21:43.945492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:11.610 [2024-11-25 23:21:43.945499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.610 [2024-11-25 23:21:43.945506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:11.610 [2024-11-25 23:21:43.945513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:11.610 [2024-11-25 23:21:43.945519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.610 [2024-11-25 23:21:43.945526] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:11.610 [2024-11-25 23:21:43.945533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:11.610 [2024-11-25 23:21:43.945540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.610 [2024-11-25 23:21:43.945547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:11.610 [2024-11-25 23:21:43.945554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:11.610 [2024-11-25 23:21:43.945560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.610 [2024-11-25 23:21:43.945567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:11.610 [2024-11-25 23:21:43.945574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:11.610 [2024-11-25 23:21:43.945580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.610 [2024-11-25 23:21:43.945587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:11.610 [2024-11-25 23:21:43.945594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:11.610 [2024-11-25 23:21:43.945601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.610 [2024-11-25 23:21:43.945607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:11.610 [2024-11-25 23:21:43.945614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:11.610 [2024-11-25 23:21:43.945620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.610 [2024-11-25 23:21:43.945627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:11.610 [2024-11-25 23:21:43.945634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:11.610 [2024-11-25 23:21:43.945640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.610 [2024-11-25 23:21:43.945646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:11.610 [2024-11-25 23:21:43.945652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:11.610 [2024-11-25 23:21:43.945659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.610 [2024-11-25 23:21:43.945665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:11.610 [2024-11-25 23:21:43.945672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:11.610 [2024-11-25 23:21:43.945679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.610 [2024-11-25 23:21:43.945685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:11.610 [2024-11-25 23:21:43.945692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:11.610 [2024-11-25 23:21:43.945698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.610 [2024-11-25 23:21:43.945707] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:11.610 [2024-11-25 23:21:43.945715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:11.610 [2024-11-25 23:21:43.945725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.610 [2024-11-25 23:21:43.945732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.610 [2024-11-25 23:21:43.945740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:11.610 
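The 90.00 MiB printed for Region l2p above follows directly from two numbers in the layout setup: 23592960 L2P entries at an address size of 4 bytes,

    23592960 \times 4\,\mathrm{B} = 94\,371\,840\,\mathrm{B} = 90\,\mathrm{MiB}.

The entry count itself is one entry per logical block over 90 GiB of mapped space, assuming 4 KiB blocks: 90 x 262144 = 23592960.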
[2024-11-25 23:21:43.945748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:11.610 [2024-11-25 23:21:43.945754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:11.610 [2024-11-25 23:21:43.945762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:11.610 [2024-11-25 23:21:43.945769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:11.610 [2024-11-25 23:21:43.945777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:11.610 [2024-11-25 23:21:43.945786] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:11.610 [2024-11-25 23:21:43.945795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.610 [2024-11-25 23:21:43.945803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:11.610 [2024-11-25 23:21:43.945810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:11.610 [2024-11-25 23:21:43.945817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:11.610 [2024-11-25 23:21:43.945824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:11.610 [2024-11-25 23:21:43.945832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:11.610 [2024-11-25 23:21:43.945839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:11.610 [2024-11-25 23:21:43.945846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:11.610 [2024-11-25 23:21:43.945853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:11.610 [2024-11-25 23:21:43.945860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:11.610 [2024-11-25 23:21:43.945869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:11.610 [2024-11-25 23:21:43.945876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:11.610 [2024-11-25 23:21:43.945885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:11.610 [2024-11-25 23:21:43.945892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:11.610 [2024-11-25 23:21:43.945901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:11.610 [2024-11-25 23:21:43.945908] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:11.610 [2024-11-25 23:21:43.945916] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.610 [2024-11-25 23:21:43.945925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:11.610 [2024-11-25 23:21:43.945932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:11.610 [2024-11-25 23:21:43.945938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:11.610 [2024-11-25 23:21:43.945945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:11.610 [2024-11-25 23:21:43.945953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.610 [2024-11-25 23:21:43.945964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:11.610 [2024-11-25 23:21:43.945971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:20:11.611 [2024-11-25 23:21:43.945978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:43.975525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:43.975663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:11.873 [2024-11-25 23:21:43.975679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.494 ms 00:20:11.873 [2024-11-25 23:21:43.975688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:43.975814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:43.975824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:11.873 [2024-11-25 23:21:43.975833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:11.873 [2024-11-25 23:21:43.975841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.019561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.019711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:11.873 [2024-11-25 23:21:44.019734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.699 ms 00:20:11.873 [2024-11-25 23:21:44.019744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.019839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.019851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:11.873 [2024-11-25 23:21:44.019860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:11.873 [2024-11-25 23:21:44.019867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.020347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.020372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:11.873 [2024-11-25 23:21:44.020382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:20:11.873 [2024-11-25 23:21:44.020396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 
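The two superblock metadata layout dumps above give region sizes as hexadecimal block counts (blk_sz); under the same 4 KiB block assumption they reconcile with the MiB figures from the earlier layout dump. A small decoding helper, purely illustrative (bash, requires bc):

    # Convert a hex blk_sz from the superblock dump to MiB (4 KiB blocks assumed).
    blk_sz_mib() { echo "scale=2; $((16#${1#0x})) * 4096 / 1048576" | bc; }
    blk_sz_mib 0x5a00      # -> 90.00     (type 0x2, matching the l2p region)
    blk_sz_mib 0x1900000   # -> 102400.00 (type 0x9, matching data_btm)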
23:21:44.020541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.020551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:11.873 [2024-11-25 23:21:44.020560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:20:11.873 [2024-11-25 23:21:44.020567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.035814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.035848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:11.873 [2024-11-25 23:21:44.035858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.225 ms 00:20:11.873 [2024-11-25 23:21:44.035866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.049309] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:11.873 [2024-11-25 23:21:44.049344] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:11.873 [2024-11-25 23:21:44.049357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.049365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:11.873 [2024-11-25 23:21:44.049374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.390 ms 00:20:11.873 [2024-11-25 23:21:44.049382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.074265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.074403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:11.873 [2024-11-25 23:21:44.074420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.808 ms 00:20:11.873 [2024-11-25 23:21:44.074429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.086425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.086457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:11.873 [2024-11-25 23:21:44.086467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.925 ms 00:20:11.873 [2024-11-25 23:21:44.086474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.098232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.098262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:11.873 [2024-11-25 23:21:44.098272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.688 ms 00:20:11.873 [2024-11-25 23:21:44.098279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.098905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.098925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:11.873 [2024-11-25 23:21:44.098935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:20:11.873 [2024-11-25 23:21:44.098943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.160746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.160796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:11.873 [2024-11-25 23:21:44.160809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.779 ms 00:20:11.873 [2024-11-25 23:21:44.160818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.171712] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:11.873 [2024-11-25 23:21:44.190037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.190089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:11.873 [2024-11-25 23:21:44.190100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.117 ms 00:20:11.873 [2024-11-25 23:21:44.190113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.190196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.190207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:11.873 [2024-11-25 23:21:44.190216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:11.873 [2024-11-25 23:21:44.190224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.190279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.190290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:11.873 [2024-11-25 23:21:44.190298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:11.873 [2024-11-25 23:21:44.190310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.190333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.190342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:11.873 [2024-11-25 23:21:44.190350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:11.873 [2024-11-25 23:21:44.190358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.190393] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:11.873 [2024-11-25 23:21:44.190405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.190413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:11.873 [2024-11-25 23:21:44.190421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:11.873 [2024-11-25 23:21:44.190429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.214645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.873 [2024-11-25 23:21:44.214683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:11.873 [2024-11-25 23:21:44.214694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.197 ms 00:20:11.873 [2024-11-25 23:21:44.214703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.873 [2024-11-25 23:21:44.214804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.874 [2024-11-25 23:21:44.214815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:11.874 [2024-11-25 23:21:44.214825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:11.874 [2024-11-25 23:21:44.214833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.874 [2024-11-25 23:21:44.216235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:11.874 [2024-11-25 23:21:44.219296] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.978 ms, result 0 00:20:11.874 [2024-11-25 23:21:44.220610] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:11.874 [2024-11-25 23:21:44.233928] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:13.264  [2024-11-25T23:21:46.573Z] Copying: 13/256 [MB] (13 MBps) [2024-11-25T23:21:47.518Z] Copying: 31/256 [MB] (17 MBps) [2024-11-25T23:21:48.462Z] Copying: 44/256 [MB] (12 MBps) [2024-11-25T23:21:49.405Z] Copying: 56/256 [MB] (12 MBps) [2024-11-25T23:21:50.348Z] Copying: 67/256 [MB] (10 MBps) [2024-11-25T23:21:51.293Z] Copying: 78/256 [MB] (11 MBps) [2024-11-25T23:21:52.682Z] Copying: 90/256 [MB] (11 MBps) [2024-11-25T23:21:53.288Z] Copying: 106/256 [MB] (15 MBps) [2024-11-25T23:21:54.677Z] Copying: 123/256 [MB] (17 MBps) [2024-11-25T23:21:55.250Z] Copying: 142/256 [MB] (19 MBps) [2024-11-25T23:21:56.634Z] Copying: 160/256 [MB] (18 MBps) [2024-11-25T23:21:57.581Z] Copying: 179/256 [MB] (18 MBps) [2024-11-25T23:21:58.526Z] Copying: 193776/262144 [kB] (10148 kBps) [2024-11-25T23:21:59.470Z] Copying: 199/256 [MB] (10 MBps) [2024-11-25T23:22:00.413Z] Copying: 210/256 [MB] (11 MBps) [2024-11-25T23:22:01.357Z] Copying: 222/256 [MB] (11 MBps) [2024-11-25T23:22:02.301Z] Copying: 234/256 [MB] (11 MBps) [2024-11-25T23:22:03.247Z] Copying: 245/256 [MB] (11 MBps) [2024-11-25T23:22:03.510Z] Copying: 255/256 [MB] (10 MBps) [2024-11-25T23:22:03.510Z] Copying: 256/256 [MB] (average 13 MBps)[2024-11-25 23:22:03.262094] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:31.141 [2024-11-25 23:22:03.273002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.141 [2024-11-25 23:22:03.273087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:31.141 [2024-11-25 23:22:03.273106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:31.141 [2024-11-25 23:22:03.273125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.141 [2024-11-25 23:22:03.273151] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:31.141 [2024-11-25 23:22:03.276371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.141 [2024-11-25 23:22:03.276412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:31.141 [2024-11-25 23:22:03.276424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.203 ms 00:20:31.141 [2024-11-25 23:22:03.276433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.141 [2024-11-25 23:22:03.276705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.141 [2024-11-25 23:22:03.276717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:31.141 [2024-11-25 23:22:03.276727] mngt/ftl_mngt.c: 
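One more cross-check on the copy that just finished: at the reported average of 13 MBps, the 256 MiB job needs roughly 256 / 13, about 20 seconds, which is consistent with the spread of the progress timestamps above (23:21:46Z through 23:22:03Z) once the FTL startup time folded into the average is included. A throwaway check:

    # job size / average rate, in whole seconds
    echo $((256 / 13))   # -> 19, i.e. ~20 s including startup overhead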
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:20:31.141 [2024-11-25 23:22:03.276736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.141 [2024-11-25 23:22:03.280465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.141 [2024-11-25 23:22:03.280499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:31.141 [2024-11-25 23:22:03.280509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.713 ms 00:20:31.141 [2024-11-25 23:22:03.280518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.141 [2024-11-25 23:22:03.287427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.141 [2024-11-25 23:22:03.287667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:31.141 [2024-11-25 23:22:03.287690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.890 ms 00:20:31.141 [2024-11-25 23:22:03.287699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.141 [2024-11-25 23:22:03.313119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.141 [2024-11-25 23:22:03.313167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:31.141 [2024-11-25 23:22:03.313180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.346 ms 00:20:31.141 [2024-11-25 23:22:03.313189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.141 [2024-11-25 23:22:03.330091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.141 [2024-11-25 23:22:03.330292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:31.141 [2024-11-25 23:22:03.330324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.852 ms 00:20:31.141 [2024-11-25 23:22:03.330334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.141 [2024-11-25 23:22:03.330712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.141 [2024-11-25 23:22:03.330739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:31.141 [2024-11-25 23:22:03.330763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:31.141 [2024-11-25 23:22:03.330773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.141 [2024-11-25 23:22:03.356860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.141 [2024-11-25 23:22:03.356906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:31.141 [2024-11-25 23:22:03.356919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.070 ms 00:20:31.141 [2024-11-25 23:22:03.356927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.141 [2024-11-25 23:22:03.381991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.141 [2024-11-25 23:22:03.382035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:31.141 [2024-11-25 23:22:03.382046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.003 ms 00:20:31.141 [2024-11-25 23:22:03.382053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.141 [2024-11-25 23:22:03.406945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.141 [2024-11-25 23:22:03.406988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
superblock 00:20:31.141 [2024-11-25 23:22:03.406999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.630 ms 00:20:31.141 [2024-11-25 23:22:03.407006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.141 [2024-11-25 23:22:03.431443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.141 [2024-11-25 23:22:03.431486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:31.141 [2024-11-25 23:22:03.431498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.343 ms 00:20:31.141 [2024-11-25 23:22:03.431505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.141 [2024-11-25 23:22:03.431578] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:31.141 [2024-11-25 23:22:03.431596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:31.141 [2024-11-25 23:22:03.431608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:31.141 [2024-11-25 23:22:03.431616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:31.141 [2024-11-25 23:22:03.431625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:31.141 [2024-11-25 23:22:03.431633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:31.141 [2024-11-25 23:22:03.431640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:31.141 [2024-11-25 23:22:03.431648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431750] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 
[2024-11-25 23:22:03.431942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.431993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 
state: free 00:20:31.142 [2024-11-25 23:22:03.432158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:31.142 [2024-11-25 23:22:03.432187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 
0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:31.143 [2024-11-25 23:22:03.432460] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:31.143 [2024-11-25 23:22:03.432469] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a832dbc-fe5a-4899-8dc2-20f67e9df730 00:20:31.143 [2024-11-25 23:22:03.432478] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:31.143 [2024-11-25 23:22:03.432488] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:31.143 [2024-11-25 23:22:03.432497] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:31.143 [2024-11-25 23:22:03.432506] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:31.143 [2024-11-25 23:22:03.432513] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:31.143 [2024-11-25 23:22:03.432521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:31.143 [2024-11-25 23:22:03.432529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:31.143 [2024-11-25 23:22:03.432535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:31.143 [2024-11-25 23:22:03.432541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:31.143 [2024-11-25 23:22:03.432550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.143 [2024-11-25 23:22:03.432562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:31.143 [2024-11-25 23:22:03.432571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:20:31.143 [2024-11-25 23:22:03.432579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.143 [2024-11-25 23:22:03.447508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.143 [2024-11-25 23:22:03.447695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:31.143 [2024-11-25 23:22:03.447714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.898 ms 00:20:31.143 [2024-11-25 23:22:03.447724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.143 [2024-11-25 23:22:03.448201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.143 [2024-11-25 23:22:03.448217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:31.143 [2024-11-25 23:22:03.448228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:20:31.143 [2024-11-25 23:22:03.448237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.143 [2024-11-25 23:22:03.490222] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.143 [2024-11-25 23:22:03.490390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.143 [2024-11-25 23:22:03.490410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.143 [2024-11-25 23:22:03.490418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.143 [2024-11-25 23:22:03.490511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.143 [2024-11-25 23:22:03.490521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.143 [2024-11-25 23:22:03.490529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.143 [2024-11-25 23:22:03.490538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.143 [2024-11-25 23:22:03.490597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.143 [2024-11-25 23:22:03.490610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.143 [2024-11-25 23:22:03.490619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.143 [2024-11-25 23:22:03.490627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.143 [2024-11-25 23:22:03.490650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.143 [2024-11-25 23:22:03.490660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.143 [2024-11-25 23:22:03.490668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.143 [2024-11-25 23:22:03.490676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.405 [2024-11-25 23:22:03.583221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.405 [2024-11-25 23:22:03.583278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.405 [2024-11-25 23:22:03.583294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.405 [2024-11-25 23:22:03.583303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.405 [2024-11-25 23:22:03.657541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.405 [2024-11-25 23:22:03.657823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.405 [2024-11-25 23:22:03.657845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.405 [2024-11-25 23:22:03.657853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.405 [2024-11-25 23:22:03.657928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.405 [2024-11-25 23:22:03.657938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.405 [2024-11-25 23:22:03.657948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.405 [2024-11-25 23:22:03.657957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.405 [2024-11-25 23:22:03.657993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.405 [2024-11-25 23:22:03.658003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.405 [2024-11-25 23:22:03.658018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.405 [2024-11-25 23:22:03.658028] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:31.405 [2024-11-25 23:22:03.658175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.405 [2024-11-25 23:22:03.658190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.405 [2024-11-25 23:22:03.658199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.405 [2024-11-25 23:22:03.658208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.405 [2024-11-25 23:22:03.658249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.405 [2024-11-25 23:22:03.658262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:31.405 [2024-11-25 23:22:03.658276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.405 [2024-11-25 23:22:03.658285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.405 [2024-11-25 23:22:03.658338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.405 [2024-11-25 23:22:03.658349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.405 [2024-11-25 23:22:03.658359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.405 [2024-11-25 23:22:03.658368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.405 [2024-11-25 23:22:03.658430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.405 [2024-11-25 23:22:03.658443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.405 [2024-11-25 23:22:03.658458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.405 [2024-11-25 23:22:03.658467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.405 [2024-11-25 23:22:03.658655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 385.631 ms, result 0 00:20:31.978 00:20:31.978 00:20:31.978 23:22:04 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:31.978 23:22:04 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:32.922 23:22:05 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:33.183 [2024-11-25 23:22:05.311540] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:20:33.183 [2024-11-25 23:22:05.311879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77039 ] 00:20:33.183 [2024-11-25 23:22:05.471527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.444 [2024-11-25 23:22:05.561436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.444 [2024-11-25 23:22:05.788119] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:33.444 [2024-11-25 23:22:05.788172] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:33.706 [2024-11-25 23:22:05.944447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.706 [2024-11-25 23:22:05.944616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:33.706 [2024-11-25 23:22:05.944634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:33.706 [2024-11-25 23:22:05.944641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.706 [2024-11-25 23:22:05.946883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.706 [2024-11-25 23:22:05.946913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:33.706 [2024-11-25 23:22:05.946922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.224 ms 00:20:33.706 [2024-11-25 23:22:05.946928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.706 [2024-11-25 23:22:05.946991] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:33.706 [2024-11-25 23:22:05.947561] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:33.706 [2024-11-25 23:22:05.947574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.706 [2024-11-25 23:22:05.947581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:33.706 [2024-11-25 23:22:05.947589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:20:33.706 [2024-11-25 23:22:05.947595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.706 [2024-11-25 23:22:05.948897] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:33.706 [2024-11-25 23:22:05.959338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.706 [2024-11-25 23:22:05.959364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:33.706 [2024-11-25 23:22:05.959373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.443 ms 00:20:33.706 [2024-11-25 23:22:05.959380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.706 [2024-11-25 23:22:05.959451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.706 [2024-11-25 23:22:05.959460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:33.706 [2024-11-25 23:22:05.959467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:33.706 [2024-11-25 23:22:05.959473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.706 [2024-11-25 23:22:05.965746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:33.706 [2024-11-25 23:22:05.965772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:33.706 [2024-11-25 23:22:05.965779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.242 ms 00:20:33.706 [2024-11-25 23:22:05.965785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.706 [2024-11-25 23:22:05.965857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.706 [2024-11-25 23:22:05.965865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:33.706 [2024-11-25 23:22:05.965871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:33.706 [2024-11-25 23:22:05.965878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.706 [2024-11-25 23:22:05.965898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.706 [2024-11-25 23:22:05.965905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:33.706 [2024-11-25 23:22:05.965911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:33.706 [2024-11-25 23:22:05.965917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.706 [2024-11-25 23:22:05.965936] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:33.706 [2024-11-25 23:22:05.968967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.706 [2024-11-25 23:22:05.968991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:33.706 [2024-11-25 23:22:05.968998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.035 ms 00:20:33.706 [2024-11-25 23:22:05.969004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.706 [2024-11-25 23:22:05.969033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.706 [2024-11-25 23:22:05.969040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:33.706 [2024-11-25 23:22:05.969046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:33.706 [2024-11-25 23:22:05.969052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.706 [2024-11-25 23:22:05.969088] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:33.706 [2024-11-25 23:22:05.969105] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:33.706 [2024-11-25 23:22:05.969135] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:33.706 [2024-11-25 23:22:05.969148] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:33.706 [2024-11-25 23:22:05.969230] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:33.706 [2024-11-25 23:22:05.969238] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:33.706 [2024-11-25 23:22:05.969246] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:33.706 [2024-11-25 23:22:05.969258] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:33.706 [2024-11-25 23:22:05.969265] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:33.706 [2024-11-25 23:22:05.969271] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:33.706 [2024-11-25 23:22:05.969277] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:33.706 [2024-11-25 23:22:05.969283] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:33.706 [2024-11-25 23:22:05.969289] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:33.706 [2024-11-25 23:22:05.969296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.706 [2024-11-25 23:22:05.969301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:33.706 [2024-11-25 23:22:05.969308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:20:33.706 [2024-11-25 23:22:05.969314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.706 [2024-11-25 23:22:05.969381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.706 [2024-11-25 23:22:05.969391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:33.706 [2024-11-25 23:22:05.969397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:33.706 [2024-11-25 23:22:05.969403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.706 [2024-11-25 23:22:05.969490] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:33.706 [2024-11-25 23:22:05.969499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:33.706 [2024-11-25 23:22:05.969507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:33.706 [2024-11-25 23:22:05.969514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.706 [2024-11-25 23:22:05.969520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:33.706 [2024-11-25 23:22:05.969525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:33.706 [2024-11-25 23:22:05.969531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:33.706 [2024-11-25 23:22:05.969537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:33.706 [2024-11-25 23:22:05.969543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:33.706 [2024-11-25 23:22:05.969548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:33.706 [2024-11-25 23:22:05.969554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:33.706 [2024-11-25 23:22:05.969567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:33.706 [2024-11-25 23:22:05.969573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:33.706 [2024-11-25 23:22:05.969579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:33.706 [2024-11-25 23:22:05.969584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:33.706 [2024-11-25 23:22:05.969589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.706 [2024-11-25 23:22:05.969595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:33.706 [2024-11-25 23:22:05.969600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:33.706 [2024-11-25 23:22:05.969606] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.706 [2024-11-25 23:22:05.969611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:33.706 [2024-11-25 23:22:05.969616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:33.707 [2024-11-25 23:22:05.969621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.707 [2024-11-25 23:22:05.969626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:33.707 [2024-11-25 23:22:05.969632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:33.707 [2024-11-25 23:22:05.969636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.707 [2024-11-25 23:22:05.969642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:33.707 [2024-11-25 23:22:05.969647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:33.707 [2024-11-25 23:22:05.969651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.707 [2024-11-25 23:22:05.969657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:33.707 [2024-11-25 23:22:05.969662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:33.707 [2024-11-25 23:22:05.969667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.707 [2024-11-25 23:22:05.969672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:33.707 [2024-11-25 23:22:05.969677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:33.707 [2024-11-25 23:22:05.969682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:33.707 [2024-11-25 23:22:05.969687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:33.707 [2024-11-25 23:22:05.969692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:33.707 [2024-11-25 23:22:05.969697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:33.707 [2024-11-25 23:22:05.969701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:33.707 [2024-11-25 23:22:05.969707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:33.707 [2024-11-25 23:22:05.969713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.707 [2024-11-25 23:22:05.969717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:33.707 [2024-11-25 23:22:05.969722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:33.707 [2024-11-25 23:22:05.969728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.707 [2024-11-25 23:22:05.969733] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:33.707 [2024-11-25 23:22:05.969740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:33.707 [2024-11-25 23:22:05.969749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:33.707 [2024-11-25 23:22:05.969755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.707 [2024-11-25 23:22:05.969761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:33.707 [2024-11-25 23:22:05.969767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:33.707 [2024-11-25 23:22:05.969772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:33.707 
[2024-11-25 23:22:05.969777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:33.707 [2024-11-25 23:22:05.969782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:33.707 [2024-11-25 23:22:05.969787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:33.707 [2024-11-25 23:22:05.969794] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:33.707 [2024-11-25 23:22:05.969801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:33.707 [2024-11-25 23:22:05.969808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:33.707 [2024-11-25 23:22:05.969813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:33.707 [2024-11-25 23:22:05.969818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:33.707 [2024-11-25 23:22:05.969824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:33.707 [2024-11-25 23:22:05.969829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:33.707 [2024-11-25 23:22:05.969834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:33.707 [2024-11-25 23:22:05.969839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:33.707 [2024-11-25 23:22:05.969844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:33.707 [2024-11-25 23:22:05.969849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:33.707 [2024-11-25 23:22:05.969855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:33.707 [2024-11-25 23:22:05.969860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:33.707 [2024-11-25 23:22:05.969866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:33.707 [2024-11-25 23:22:05.969871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:33.707 [2024-11-25 23:22:05.969876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:33.707 [2024-11-25 23:22:05.969881] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:33.707 [2024-11-25 23:22:05.969887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:33.707 [2024-11-25 23:22:05.969894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:33.707 [2024-11-25 23:22:05.969899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:33.707 [2024-11-25 23:22:05.969904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:33.707 [2024-11-25 23:22:05.969910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:33.707 [2024-11-25 23:22:05.969917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.707 [2024-11-25 23:22:05.969925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:33.707 [2024-11-25 23:22:05.969931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:20:33.707 [2024-11-25 23:22:05.969939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.707 [2024-11-25 23:22:05.994432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.707 [2024-11-25 23:22:05.994557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:33.707 [2024-11-25 23:22:05.994603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.441 ms 00:20:33.707 [2024-11-25 23:22:05.994621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.707 [2024-11-25 23:22:05.994730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.707 [2024-11-25 23:22:05.994751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:33.707 [2024-11-25 23:22:05.994767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:33.707 [2024-11-25 23:22:05.994781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.707 [2024-11-25 23:22:06.037458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.707 [2024-11-25 23:22:06.037587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:33.707 [2024-11-25 23:22:06.037643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.651 ms 00:20:33.707 [2024-11-25 23:22:06.037662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.707 [2024-11-25 23:22:06.037748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.707 [2024-11-25 23:22:06.037772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:33.707 [2024-11-25 23:22:06.037788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:33.707 [2024-11-25 23:22:06.037852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.707 [2024-11-25 23:22:06.038276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.707 [2024-11-25 23:22:06.038315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:33.707 [2024-11-25 23:22:06.038491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:20:33.707 [2024-11-25 23:22:06.038522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.707 [2024-11-25 23:22:06.038647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.707 [2024-11-25 23:22:06.038665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:33.707 [2024-11-25 23:22:06.038681] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:33.707 [2024-11-25 23:22:06.038695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.707 [2024-11-25 23:22:06.050938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.707 [2024-11-25 23:22:06.051026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:33.707 [2024-11-25 23:22:06.051077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.217 ms 00:20:33.707 [2024-11-25 23:22:06.051096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.707 [2024-11-25 23:22:06.061835] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:33.707 [2024-11-25 23:22:06.061933] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:33.707 [2024-11-25 23:22:06.061978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.707 [2024-11-25 23:22:06.061994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:33.707 [2024-11-25 23:22:06.062010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.793 ms 00:20:33.707 [2024-11-25 23:22:06.062024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.970 [2024-11-25 23:22:06.080952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.970 [2024-11-25 23:22:06.081067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:33.970 [2024-11-25 23:22:06.081081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.790 ms 00:20:33.970 [2024-11-25 23:22:06.081089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.970 [2024-11-25 23:22:06.090358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.970 [2024-11-25 23:22:06.090384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:33.970 [2024-11-25 23:22:06.090392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.210 ms 00:20:33.970 [2024-11-25 23:22:06.090398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.970 [2024-11-25 23:22:06.099453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.970 [2024-11-25 23:22:06.099478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:33.970 [2024-11-25 23:22:06.099485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.011 ms 00:20:33.970 [2024-11-25 23:22:06.099491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.970 [2024-11-25 23:22:06.099963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.970 [2024-11-25 23:22:06.099974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:33.970 [2024-11-25 23:22:06.099981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:20:33.970 [2024-11-25 23:22:06.099988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.970 [2024-11-25 23:22:06.148461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.970 [2024-11-25 23:22:06.148494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:33.970 [2024-11-25 23:22:06.148504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.455 ms 00:20:33.970 [2024-11-25 23:22:06.148511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.970 [2024-11-25 23:22:06.156499] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:33.970 [2024-11-25 23:22:06.171120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.970 [2024-11-25 23:22:06.171162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:33.970 [2024-11-25 23:22:06.171172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.537 ms 00:20:33.970 [2024-11-25 23:22:06.171182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.970 [2024-11-25 23:22:06.171254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.970 [2024-11-25 23:22:06.171264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:33.970 [2024-11-25 23:22:06.171271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:33.970 [2024-11-25 23:22:06.171278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.970 [2024-11-25 23:22:06.171320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.970 [2024-11-25 23:22:06.171327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:33.970 [2024-11-25 23:22:06.171334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:33.970 [2024-11-25 23:22:06.171343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.970 [2024-11-25 23:22:06.171366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.970 [2024-11-25 23:22:06.171374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:33.970 [2024-11-25 23:22:06.171381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:33.970 [2024-11-25 23:22:06.171387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.970 [2024-11-25 23:22:06.171415] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:33.970 [2024-11-25 23:22:06.171423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.970 [2024-11-25 23:22:06.171429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:33.970 [2024-11-25 23:22:06.171436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:33.970 [2024-11-25 23:22:06.171443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.970 [2024-11-25 23:22:06.190286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.970 [2024-11-25 23:22:06.190419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:33.970 [2024-11-25 23:22:06.190434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.828 ms 00:20:33.970 [2024-11-25 23:22:06.190441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.970 [2024-11-25 23:22:06.190514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.970 [2024-11-25 23:22:06.190522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:33.970 [2024-11-25 23:22:06.190529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:33.970 [2024-11-25 23:22:06.190536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:33.970 [2024-11-25 23:22:06.191303] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:33.970 [2024-11-25 23:22:06.193677] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 246.586 ms, result 0 00:20:33.970 [2024-11-25 23:22:06.194821] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:33.970 [2024-11-25 23:22:06.205670] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:34.232  [2024-11-25T23:22:06.601Z] Copying: 4096/4096 [kB] (average 11 MBps)[2024-11-25 23:22:06.566164] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:34.232 [2024-11-25 23:22:06.572459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.232 [2024-11-25 23:22:06.572485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:34.232 [2024-11-25 23:22:06.572494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:34.232 [2024-11-25 23:22:06.572505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.232 [2024-11-25 23:22:06.572521] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:34.232 [2024-11-25 23:22:06.574704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.232 [2024-11-25 23:22:06.574726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:34.232 [2024-11-25 23:22:06.574735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.173 ms 00:20:34.232 [2024-11-25 23:22:06.574741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.232 [2024-11-25 23:22:06.577017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.232 [2024-11-25 23:22:06.577042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:34.232 [2024-11-25 23:22:06.577050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.250 ms 00:20:34.232 [2024-11-25 23:22:06.577064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.233 [2024-11-25 23:22:06.580454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.233 [2024-11-25 23:22:06.580478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:34.233 [2024-11-25 23:22:06.580486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.377 ms 00:20:34.233 [2024-11-25 23:22:06.580492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.233 [2024-11-25 23:22:06.585774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.233 [2024-11-25 23:22:06.585795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:34.233 [2024-11-25 23:22:06.585803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.262 ms 00:20:34.233 [2024-11-25 23:22:06.585810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.495 [2024-11-25 23:22:06.603979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.495 [2024-11-25 23:22:06.604005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:34.495 [2024-11-25 23:22:06.604013] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 18.127 ms 00:20:34.495 [2024-11-25 23:22:06.604018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.495 [2024-11-25 23:22:06.616095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.495 [2024-11-25 23:22:06.616120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:34.495 [2024-11-25 23:22:06.616133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.048 ms 00:20:34.495 [2024-11-25 23:22:06.616140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.495 [2024-11-25 23:22:06.616232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.495 [2024-11-25 23:22:06.616239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:34.495 [2024-11-25 23:22:06.616253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:34.495 [2024-11-25 23:22:06.616259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.495 [2024-11-25 23:22:06.634756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.495 [2024-11-25 23:22:06.634780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:34.495 [2024-11-25 23:22:06.634787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.485 ms 00:20:34.495 [2024-11-25 23:22:06.634792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.495 [2024-11-25 23:22:06.652814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.495 [2024-11-25 23:22:06.652838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:34.495 [2024-11-25 23:22:06.652845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.994 ms 00:20:34.495 [2024-11-25 23:22:06.652856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.495 [2024-11-25 23:22:06.670354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.495 [2024-11-25 23:22:06.670379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:34.495 [2024-11-25 23:22:06.670386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.387 ms 00:20:34.495 [2024-11-25 23:22:06.670392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.495 [2024-11-25 23:22:06.687824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.495 [2024-11-25 23:22:06.687848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:34.495 [2024-11-25 23:22:06.687855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.386 ms 00:20:34.495 [2024-11-25 23:22:06.687861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.495 [2024-11-25 23:22:06.687887] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:34.495 [2024-11-25 23:22:06.687899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.687907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.687913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.687919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:34.495 [2024-11-25 23:22:06.687926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.687931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.687937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.687943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.687949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.687955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.687961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.687966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.687972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.687977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.687983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.687988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.687994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:34.495 [2024-11-25 23:22:06.688307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688399] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:34.496 [2024-11-25 23:22:06.688538] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:34.496 [2024-11-25 23:22:06.688544] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a832dbc-fe5a-4899-8dc2-20f67e9df730 00:20:34.496 [2024-11-25 23:22:06.688550] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:34.496 [2024-11-25 23:22:06.688556] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:34.496 [2024-11-25 23:22:06.688562] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:34.496 [2024-11-25 23:22:06.688568] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:34.496 [2024-11-25 23:22:06.688573] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:34.496 [2024-11-25 23:22:06.688579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:34.496 [2024-11-25 23:22:06.688586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:34.496 [2024-11-25 23:22:06.688591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:34.496 [2024-11-25 23:22:06.688596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:34.496 [2024-11-25 23:22:06.688601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.496 [2024-11-25 23:22:06.688606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:34.496 [2024-11-25 23:22:06.688613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:20:34.496 [2024-11-25 23:22:06.688618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.496 [2024-11-25 23:22:06.698002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.496 [2024-11-25 23:22:06.698025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:34.496 [2024-11-25 23:22:06.698033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.372 ms 00:20:34.496 [2024-11-25 23:22:06.698039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.496 [2024-11-25 23:22:06.698345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.496 [2024-11-25 23:22:06.698354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:34.496 [2024-11-25 23:22:06.698361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:20:34.496 [2024-11-25 23:22:06.698366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.496 [2024-11-25 23:22:06.727530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.496 [2024-11-25 23:22:06.727654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:34.496 [2024-11-25 23:22:06.727668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.496 [2024-11-25 23:22:06.727679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.496 [2024-11-25 23:22:06.727733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.496 [2024-11-25 23:22:06.727740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:34.496 [2024-11-25 23:22:06.727746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.496 [2024-11-25 23:22:06.727752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.496 [2024-11-25 23:22:06.727789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.496 [2024-11-25 23:22:06.727797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:34.496 [2024-11-25 23:22:06.727804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.496 [2024-11-25 23:22:06.727810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.496 [2024-11-25 23:22:06.727826] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.496 [2024-11-25 23:22:06.727832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:34.496 [2024-11-25 23:22:06.727838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.496 [2024-11-25 23:22:06.727843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.496 [2024-11-25 23:22:06.790345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.496 [2024-11-25 23:22:06.790478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:34.496 [2024-11-25 23:22:06.790492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.496 [2024-11-25 23:22:06.790499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.496 [2024-11-25 23:22:06.842098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.496 [2024-11-25 23:22:06.842130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:34.496 [2024-11-25 23:22:06.842139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.496 [2024-11-25 23:22:06.842146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.496 [2024-11-25 23:22:06.842195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.496 [2024-11-25 23:22:06.842203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:34.496 [2024-11-25 23:22:06.842209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.496 [2024-11-25 23:22:06.842216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.496 [2024-11-25 23:22:06.842240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.496 [2024-11-25 23:22:06.842250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:34.496 [2024-11-25 23:22:06.842257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.496 [2024-11-25 23:22:06.842264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.496 [2024-11-25 23:22:06.842337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.496 [2024-11-25 23:22:06.842346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:34.496 [2024-11-25 23:22:06.842353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.496 [2024-11-25 23:22:06.842359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.496 [2024-11-25 23:22:06.842386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.496 [2024-11-25 23:22:06.842393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:34.496 [2024-11-25 23:22:06.842402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.496 [2024-11-25 23:22:06.842408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.496 [2024-11-25 23:22:06.842442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.496 [2024-11-25 23:22:06.842450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:34.496 [2024-11-25 23:22:06.842456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.496 [2024-11-25 23:22:06.842462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:34.496 [2024-11-25 23:22:06.842504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.496 [2024-11-25 23:22:06.842514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:34.496 [2024-11-25 23:22:06.842521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.496 [2024-11-25 23:22:06.842527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.496 [2024-11-25 23:22:06.842658] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 270.168 ms, result 0 00:20:35.068 00:20:35.068 00:20:35.330 23:22:07 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=77068 00:20:35.330 23:22:07 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 77068 00:20:35.330 23:22:07 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77068 ']' 00:20:35.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.330 23:22:07 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.330 23:22:07 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.330 23:22:07 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:35.330 23:22:07 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.330 23:22:07 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.330 23:22:07 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:35.331 [2024-11-25 23:22:07.507879] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:20:35.331 [2024-11-25 23:22:07.507999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77068 ] 00:20:35.331 [2024-11-25 23:22:07.663000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.591 [2024-11-25 23:22:07.746728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.161 23:22:08 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.161 23:22:08 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:36.161 23:22:08 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:36.424 [2024-11-25 23:22:08.541553] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:36.424 [2024-11-25 23:22:08.541605] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:36.424 [2024-11-25 23:22:08.698659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.424 [2024-11-25 23:22:08.698695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:36.424 [2024-11-25 23:22:08.698708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:36.424 [2024-11-25 23:22:08.698715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.424 [2024-11-25 23:22:08.700942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.424 [2024-11-25 23:22:08.700971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:36.424 [2024-11-25 23:22:08.700980] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.211 ms 00:20:36.424 [2024-11-25 23:22:08.700986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.424 [2024-11-25 23:22:08.701051] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:36.424 [2024-11-25 23:22:08.701633] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:36.424 [2024-11-25 23:22:08.701651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.424 [2024-11-25 23:22:08.701659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:36.424 [2024-11-25 23:22:08.701668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:20:36.424 [2024-11-25 23:22:08.701674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.424 [2024-11-25 23:22:08.702939] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:36.424 [2024-11-25 23:22:08.713504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.424 [2024-11-25 23:22:08.713534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:36.424 [2024-11-25 23:22:08.713544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.569 ms 00:20:36.424 [2024-11-25 23:22:08.713552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.424 [2024-11-25 23:22:08.713617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.424 [2024-11-25 23:22:08.713627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:36.424 [2024-11-25 23:22:08.713634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:36.424 [2024-11-25 23:22:08.713642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.424 [2024-11-25 23:22:08.719873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.424 [2024-11-25 23:22:08.719903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:36.424 [2024-11-25 23:22:08.719911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.191 ms 00:20:36.424 [2024-11-25 23:22:08.719918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.424 [2024-11-25 23:22:08.720003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.424 [2024-11-25 23:22:08.720013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:36.424 [2024-11-25 23:22:08.720019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:36.424 [2024-11-25 23:22:08.720030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.424 [2024-11-25 23:22:08.720052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.424 [2024-11-25 23:22:08.720085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:36.424 [2024-11-25 23:22:08.720092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:36.424 [2024-11-25 23:22:08.720099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.424 [2024-11-25 23:22:08.720117] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:36.424 [2024-11-25 23:22:08.723118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:36.424 [2024-11-25 23:22:08.723139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:36.424 [2024-11-25 23:22:08.723149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.003 ms 00:20:36.424 [2024-11-25 23:22:08.723155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.424 [2024-11-25 23:22:08.723185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.424 [2024-11-25 23:22:08.723191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:36.424 [2024-11-25 23:22:08.723200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:36.424 [2024-11-25 23:22:08.723207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.424 [2024-11-25 23:22:08.723224] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:36.424 [2024-11-25 23:22:08.723240] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:36.424 [2024-11-25 23:22:08.723274] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:36.424 [2024-11-25 23:22:08.723285] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:36.424 [2024-11-25 23:22:08.723373] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:36.424 [2024-11-25 23:22:08.723382] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:36.424 [2024-11-25 23:22:08.723395] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:36.424 [2024-11-25 23:22:08.723403] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:36.424 [2024-11-25 23:22:08.723412] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:36.424 [2024-11-25 23:22:08.723419] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:36.424 [2024-11-25 23:22:08.723427] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:36.424 [2024-11-25 23:22:08.723433] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:36.425 [2024-11-25 23:22:08.723442] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:36.425 [2024-11-25 23:22:08.723448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.425 [2024-11-25 23:22:08.723455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:36.425 [2024-11-25 23:22:08.723461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:20:36.425 [2024-11-25 23:22:08.723468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.425 [2024-11-25 23:22:08.723537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.425 [2024-11-25 23:22:08.723545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:36.425 [2024-11-25 23:22:08.723551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:36.425 [2024-11-25 23:22:08.723558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.425 
[2024-11-25 23:22:08.723642] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:36.425 [2024-11-25 23:22:08.723652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:36.425 [2024-11-25 23:22:08.723658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:36.425 [2024-11-25 23:22:08.723666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.425 [2024-11-25 23:22:08.723673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:36.425 [2024-11-25 23:22:08.723680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:36.425 [2024-11-25 23:22:08.723686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:36.425 [2024-11-25 23:22:08.723696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:36.425 [2024-11-25 23:22:08.723702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:36.425 [2024-11-25 23:22:08.723709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.425 [2024-11-25 23:22:08.723714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:36.425 [2024-11-25 23:22:08.723722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:36.425 [2024-11-25 23:22:08.723728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.425 [2024-11-25 23:22:08.723735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:36.425 [2024-11-25 23:22:08.723740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:36.425 [2024-11-25 23:22:08.723747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.425 [2024-11-25 23:22:08.723752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:36.425 [2024-11-25 23:22:08.723759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:36.425 [2024-11-25 23:22:08.723769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.425 [2024-11-25 23:22:08.723776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:36.425 [2024-11-25 23:22:08.723781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:36.425 [2024-11-25 23:22:08.723787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.425 [2024-11-25 23:22:08.723792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:36.425 [2024-11-25 23:22:08.723800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:36.425 [2024-11-25 23:22:08.723805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.425 [2024-11-25 23:22:08.723812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:36.425 [2024-11-25 23:22:08.723817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:36.425 [2024-11-25 23:22:08.723823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.425 [2024-11-25 23:22:08.723829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:36.425 [2024-11-25 23:22:08.723835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:36.425 [2024-11-25 23:22:08.723840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.425 [2024-11-25 23:22:08.723847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:20:36.425 [2024-11-25 23:22:08.723852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:36.425 [2024-11-25 23:22:08.723859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.425 [2024-11-25 23:22:08.723864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:36.425 [2024-11-25 23:22:08.723871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:36.425 [2024-11-25 23:22:08.723876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.425 [2024-11-25 23:22:08.723882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:36.425 [2024-11-25 23:22:08.723888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:36.425 [2024-11-25 23:22:08.723896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.425 [2024-11-25 23:22:08.723901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:36.425 [2024-11-25 23:22:08.723908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:36.425 [2024-11-25 23:22:08.723914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.425 [2024-11-25 23:22:08.723922] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:36.425 [2024-11-25 23:22:08.723930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:36.425 [2024-11-25 23:22:08.723937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:36.425 [2024-11-25 23:22:08.723943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.425 [2024-11-25 23:22:08.723951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:36.425 [2024-11-25 23:22:08.723957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:36.425 [2024-11-25 23:22:08.723964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:36.425 [2024-11-25 23:22:08.723970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:36.425 [2024-11-25 23:22:08.723976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:36.425 [2024-11-25 23:22:08.723982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:36.425 [2024-11-25 23:22:08.723990] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:36.425 [2024-11-25 23:22:08.723997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.425 [2024-11-25 23:22:08.724006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:36.425 [2024-11-25 23:22:08.724012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:36.425 [2024-11-25 23:22:08.724020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:36.425 [2024-11-25 23:22:08.724026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:36.425 [2024-11-25 23:22:08.724033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:20:36.425 [2024-11-25 23:22:08.724038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:36.425 [2024-11-25 23:22:08.724045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:36.425 [2024-11-25 23:22:08.724051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:36.425 [2024-11-25 23:22:08.724072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:36.425 [2024-11-25 23:22:08.724078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:36.425 [2024-11-25 23:22:08.724085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:36.425 [2024-11-25 23:22:08.724091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:36.425 [2024-11-25 23:22:08.724098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:36.425 [2024-11-25 23:22:08.724103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:36.425 [2024-11-25 23:22:08.724110] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:36.425 [2024-11-25 23:22:08.724117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.425 [2024-11-25 23:22:08.724127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:36.425 [2024-11-25 23:22:08.724133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:36.425 [2024-11-25 23:22:08.724141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:36.425 [2024-11-25 23:22:08.724147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:36.425 [2024-11-25 23:22:08.724156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.425 [2024-11-25 23:22:08.724162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:36.425 [2024-11-25 23:22:08.724171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:20:36.425 [2024-11-25 23:22:08.724179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.425 [2024-11-25 23:22:08.748458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.425 [2024-11-25 23:22:08.748486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.425 [2024-11-25 23:22:08.748496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.215 ms 00:20:36.425 [2024-11-25 23:22:08.748505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.425 
[2024-11-25 23:22:08.748599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.425 [2024-11-25 23:22:08.748607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:36.425 [2024-11-25 23:22:08.748615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:36.425 [2024-11-25 23:22:08.748621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.425 [2024-11-25 23:22:08.774888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.425 [2024-11-25 23:22:08.774916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:36.425 [2024-11-25 23:22:08.774925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.247 ms 00:20:36.425 [2024-11-25 23:22:08.774931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.426 [2024-11-25 23:22:08.774978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.426 [2024-11-25 23:22:08.774985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:36.426 [2024-11-25 23:22:08.774994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:36.426 [2024-11-25 23:22:08.775000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.426 [2024-11-25 23:22:08.775392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.426 [2024-11-25 23:22:08.775405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:36.426 [2024-11-25 23:22:08.775415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:20:36.426 [2024-11-25 23:22:08.775422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.426 [2024-11-25 23:22:08.775538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.426 [2024-11-25 23:22:08.775545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:36.426 [2024-11-25 23:22:08.775553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:36.426 [2024-11-25 23:22:08.775559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.687 [2024-11-25 23:22:08.788949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.687 [2024-11-25 23:22:08.788974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:36.687 [2024-11-25 23:22:08.788984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.371 ms 00:20:36.687 [2024-11-25 23:22:08.788990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.687 [2024-11-25 23:22:08.799738] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:36.687 [2024-11-25 23:22:08.799764] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:36.687 [2024-11-25 23:22:08.799775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.687 [2024-11-25 23:22:08.799782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:36.687 [2024-11-25 23:22:08.799791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.677 ms 00:20:36.687 [2024-11-25 23:22:08.799801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.687 [2024-11-25 23:22:08.818577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:36.687 [2024-11-25 23:22:08.818602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:36.687 [2024-11-25 23:22:08.818613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.702 ms 00:20:36.687 [2024-11-25 23:22:08.818619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.687 [2024-11-25 23:22:08.828015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.687 [2024-11-25 23:22:08.828040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:36.687 [2024-11-25 23:22:08.828052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.339 ms 00:20:36.687 [2024-11-25 23:22:08.828068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.687 [2024-11-25 23:22:08.837308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.687 [2024-11-25 23:22:08.837331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:36.687 [2024-11-25 23:22:08.837340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.198 ms 00:20:36.687 [2024-11-25 23:22:08.837346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.687 [2024-11-25 23:22:08.837809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.687 [2024-11-25 23:22:08.837819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:36.687 [2024-11-25 23:22:08.837827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:20:36.687 [2024-11-25 23:22:08.837833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.687 [2024-11-25 23:22:08.898050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.687 [2024-11-25 23:22:08.898095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:36.687 [2024-11-25 23:22:08.898109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.197 ms 00:20:36.687 [2024-11-25 23:22:08.898116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.687 [2024-11-25 23:22:08.906299] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:36.687 [2024-11-25 23:22:08.920818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.687 [2024-11-25 23:22:08.921002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:36.687 [2024-11-25 23:22:08.921019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.638 ms 00:20:36.687 [2024-11-25 23:22:08.921027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.687 [2024-11-25 23:22:08.921127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.688 [2024-11-25 23:22:08.921138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:36.688 [2024-11-25 23:22:08.921145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:36.688 [2024-11-25 23:22:08.921153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.688 [2024-11-25 23:22:08.921199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.688 [2024-11-25 23:22:08.921208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:36.688 [2024-11-25 23:22:08.921215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.030 ms 00:20:36.688 [2024-11-25 23:22:08.921224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.688 [2024-11-25 23:22:08.921244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.688 [2024-11-25 23:22:08.921252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:36.688 [2024-11-25 23:22:08.921259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:36.688 [2024-11-25 23:22:08.921268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.688 [2024-11-25 23:22:08.921295] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:36.688 [2024-11-25 23:22:08.921307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.688 [2024-11-25 23:22:08.921315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:36.688 [2024-11-25 23:22:08.921323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:36.688 [2024-11-25 23:22:08.921328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.688 [2024-11-25 23:22:08.940107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.688 [2024-11-25 23:22:08.940135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:36.688 [2024-11-25 23:22:08.940146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.757 ms 00:20:36.688 [2024-11-25 23:22:08.940153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.688 [2024-11-25 23:22:08.940230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.688 [2024-11-25 23:22:08.940239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:36.688 [2024-11-25 23:22:08.940249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:36.688 [2024-11-25 23:22:08.940255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.688 [2024-11-25 23:22:08.941052] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:36.688 [2024-11-25 23:22:08.943291] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 242.123 ms, result 0 00:20:36.688 [2024-11-25 23:22:08.945208] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:36.688 Some configs were skipped because the RPC state that can call them passed over. 
00:20:36.688 23:22:08 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:36.954 [2024-11-25 23:22:09.170619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.954 [2024-11-25 23:22:09.170655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:36.954 [2024-11-25 23:22:09.170665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.780 ms 00:20:36.954 [2024-11-25 23:22:09.170674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.954 [2024-11-25 23:22:09.170698] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.863 ms, result 0 00:20:36.954 true 00:20:36.954 23:22:09 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:37.219 [2024-11-25 23:22:09.369844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.219 [2024-11-25 23:22:09.369952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:37.219 [2024-11-25 23:22:09.369967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.826 ms 00:20:37.219 [2024-11-25 23:22:09.369973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.219 [2024-11-25 23:22:09.370002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.984 ms, result 0 00:20:37.219 true 00:20:37.219 23:22:09 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 77068 00:20:37.219 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77068 ']' 00:20:37.219 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77068 00:20:37.219 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:37.219 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.219 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77068 00:20:37.219 killing process with pid 77068 00:20:37.219 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.219 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.219 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77068' 00:20:37.219 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77068 00:20:37.219 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77068 00:20:37.793 [2024-11-25 23:22:09.981402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.793 [2024-11-25 23:22:09.981455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:37.793 [2024-11-25 23:22:09.981466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:37.793 [2024-11-25 23:22:09.981474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.793 [2024-11-25 23:22:09.981495] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:37.793 [2024-11-25 23:22:09.983636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.793 [2024-11-25 23:22:09.983663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:37.793 [2024-11-25 23:22:09.983675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.126 ms 00:20:37.793 [2024-11-25 23:22:09.983682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.793 [2024-11-25 23:22:09.983924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.793 [2024-11-25 23:22:09.983932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:37.793 [2024-11-25 23:22:09.983940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:20:37.793 [2024-11-25 23:22:09.983947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.793 [2024-11-25 23:22:09.987617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.793 [2024-11-25 23:22:09.987789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:37.793 [2024-11-25 23:22:09.987808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.653 ms 00:20:37.793 [2024-11-25 23:22:09.987815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.793 [2024-11-25 23:22:09.993063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.793 [2024-11-25 23:22:09.993088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:37.793 [2024-11-25 23:22:09.993097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.206 ms 00:20:37.793 [2024-11-25 23:22:09.993104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.793 [2024-11-25 23:22:10.001420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.793 [2024-11-25 23:22:10.001450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:37.793 [2024-11-25 23:22:10.001461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.266 ms 00:20:37.793 [2024-11-25 23:22:10.001467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.793 [2024-11-25 23:22:10.009017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.793 [2024-11-25 23:22:10.009044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:37.793 [2024-11-25 23:22:10.009063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.517 ms 00:20:37.793 [2024-11-25 23:22:10.009071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.793 [2024-11-25 23:22:10.009197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.793 [2024-11-25 23:22:10.009206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:37.793 [2024-11-25 23:22:10.009215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:37.793 [2024-11-25 23:22:10.009222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.793 [2024-11-25 23:22:10.018369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.793 [2024-11-25 23:22:10.018504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:37.793 [2024-11-25 23:22:10.018520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.130 ms 00:20:37.793 [2024-11-25 23:22:10.018527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.793 [2024-11-25 23:22:10.026898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.793 [2024-11-25 23:22:10.026921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:37.793 [2024-11-25 
23:22:10.026933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.341 ms 00:20:37.793 [2024-11-25 23:22:10.026939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.793 [2024-11-25 23:22:10.034701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.793 [2024-11-25 23:22:10.034725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:37.793 [2024-11-25 23:22:10.034734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.727 ms 00:20:37.793 [2024-11-25 23:22:10.034740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.793 [2024-11-25 23:22:10.042489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.793 [2024-11-25 23:22:10.042513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:37.793 [2024-11-25 23:22:10.042523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.693 ms 00:20:37.793 [2024-11-25 23:22:10.042528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.793 [2024-11-25 23:22:10.042567] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:37.793 [2024-11-25 23:22:10.042580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... Band 2 through Band 100 elided: every band reads identically, 0 / 261120 wr_cnt: 0 state: free ...] 00:20:37.794 [2024-11-25 23:22:10.043324] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:37.794 [2024-11-25 23:22:10.043336] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a832dbc-fe5a-4899-8dc2-20f67e9df730 00:20:37.794 [2024-11-25 23:22:10.043346] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:37.794 [2024-11-25 23:22:10.043353] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:37.794 [2024-11-25 23:22:10.043359] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:37.794 [2024-11-25 23:22:10.043367] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:37.795 [2024-11-25 23:22:10.043373] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:37.795 [2024-11-25 23:22:10.043382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:37.795 [2024-11-25 23:22:10.043388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:37.795 [2024-11-25 23:22:10.043394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:37.795 [2024-11-25 23:22:10.043399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:37.795 [2024-11-25 23:22:10.043406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.795 [2024-11-25 23:22:10.043412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:37.795 [2024-11-25 23:22:10.043420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:20:37.795 [2024-11-25 23:22:10.043426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.795 [2024-11-25 23:22:10.054300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.795 [2024-11-25 23:22:10.054323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:37.795 [2024-11-25 23:22:10.054335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.853 ms 00:20:37.795 [2024-11-25 23:22:10.054341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.795 [2024-11-25 23:22:10.054659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:20:37.795 [2024-11-25 23:22:10.054668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:37.795 [2024-11-25 23:22:10.054679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:20:37.795 [2024-11-25 23:22:10.054686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.795 [2024-11-25 23:22:10.091682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.795 [2024-11-25 23:22:10.091711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:37.795 [2024-11-25 23:22:10.091722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.795 [2024-11-25 23:22:10.091729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.795 [2024-11-25 23:22:10.091830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.795 [2024-11-25 23:22:10.091838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:37.795 [2024-11-25 23:22:10.091849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.795 [2024-11-25 23:22:10.091855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.795 [2024-11-25 23:22:10.091895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.795 [2024-11-25 23:22:10.091903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:37.795 [2024-11-25 23:22:10.091912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.795 [2024-11-25 23:22:10.091919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.795 [2024-11-25 23:22:10.091935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.795 [2024-11-25 23:22:10.091942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:37.795 [2024-11-25 23:22:10.091949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.795 [2024-11-25 23:22:10.091957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.795 [2024-11-25 23:22:10.154301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.795 [2024-11-25 23:22:10.154336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:37.795 [2024-11-25 23:22:10.154347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.795 [2024-11-25 23:22:10.154353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.055 [2024-11-25 23:22:10.205244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.055 [2024-11-25 23:22:10.205423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:38.056 [2024-11-25 23:22:10.205439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.056 [2024-11-25 23:22:10.205449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.056 [2024-11-25 23:22:10.205528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.056 [2024-11-25 23:22:10.205536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:38.056 [2024-11-25 23:22:10.205546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.056 [2024-11-25 23:22:10.205552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:38.056 [2024-11-25 23:22:10.205581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.056 [2024-11-25 23:22:10.205588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:38.056 [2024-11-25 23:22:10.205595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.056 [2024-11-25 23:22:10.205602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.056 [2024-11-25 23:22:10.205688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.056 [2024-11-25 23:22:10.205697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:38.056 [2024-11-25 23:22:10.205705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.056 [2024-11-25 23:22:10.205711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.056 [2024-11-25 23:22:10.205740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.056 [2024-11-25 23:22:10.205748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:38.056 [2024-11-25 23:22:10.205756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.056 [2024-11-25 23:22:10.205762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.056 [2024-11-25 23:22:10.205800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.056 [2024-11-25 23:22:10.205809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:38.056 [2024-11-25 23:22:10.205819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.056 [2024-11-25 23:22:10.205825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.056 [2024-11-25 23:22:10.205868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.056 [2024-11-25 23:22:10.205875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:38.056 [2024-11-25 23:22:10.205884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.056 [2024-11-25 23:22:10.205890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.056 [2024-11-25 23:22:10.206018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 224.594 ms, result 0 00:20:38.626 23:22:10 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:38.626 [2024-11-25 23:22:10.822191] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
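The spdk_dd invocation above (trim.sh line 105) reads the freshly trimmed device back into a flat file for offline comparison. The same call, reformatted for readability; every value is taken verbatim from the command line in the log:

SPDK=/home/vagrant/spdk_repo/spdk
# --ib names the input bdev (ftl0), --of a plain output file, --count the
# number of blocks to copy, and --json the bdev config that recreates ftl0
# inside the short-lived spdk_dd application.
"$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/data" \
  --count=65536 --json="$SPDK/test/ftl/config/ftl.json"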
00:20:38.626 [2024-11-25 23:22:10.822324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77116 ] 00:20:38.626 [2024-11-25 23:22:10.978548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.885 [2024-11-25 23:22:11.068434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.148 [2024-11-25 23:22:11.295117] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:39.148 [2024-11-25 23:22:11.295167] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:39.148 [2024-11-25 23:22:11.451413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.148 [2024-11-25 23:22:11.451588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:39.148 [2024-11-25 23:22:11.451606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:39.148 [2024-11-25 23:22:11.451614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.148 [2024-11-25 23:22:11.453827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.148 [2024-11-25 23:22:11.453856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.148 [2024-11-25 23:22:11.453863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.196 ms 00:20:39.148 [2024-11-25 23:22:11.453869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.148 [2024-11-25 23:22:11.453933] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:39.148 [2024-11-25 23:22:11.454465] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:39.148 [2024-11-25 23:22:11.454478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.148 [2024-11-25 23:22:11.454485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.148 [2024-11-25 23:22:11.454492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:20:39.148 [2024-11-25 23:22:11.454498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.148 [2024-11-25 23:22:11.455786] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:39.148 [2024-11-25 23:22:11.466183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.148 [2024-11-25 23:22:11.466209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:39.148 [2024-11-25 23:22:11.466218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.398 ms 00:20:39.148 [2024-11-25 23:22:11.466224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.148 [2024-11-25 23:22:11.466294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.148 [2024-11-25 23:22:11.466303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:39.148 [2024-11-25 23:22:11.466310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:39.148 [2024-11-25 23:22:11.466316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.148 [2024-11-25 23:22:11.472545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
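The "Currently unable to find bdev with name: nvc0n1" notices above are expected: spdk_dd builds its bdev stack from ftl.json, and the FTL device can only open once its base and cache bdevs exist, so the open is retried while the config is still being applied. The actual file contents are not captured in this log; the sketch below is only an illustrative guess at its shape. The bdev_ftl_create parameter values are assumptions (the base bdev name in particular), apart from ftl0, the nvc0n1p0 cache and the device UUID, which do appear in this log; the real file must also create nvc0n1 and its p0 partition, omitted here.

cat > ftl.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_ftl_create",
          "params": {
            "name": "ftl0",
            "base_bdev": "nvme0n1",
            "cache": "nvc0n1p0",
            "uuid": "8a832dbc-fe5a-4899-8dc2-20f67e9df730"
          } }
      ] }
  ]
}
EOF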
00:20:39.148 [2024-11-25 23:22:11.472570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.148 [2024-11-25 23:22:11.472577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.198 ms 00:20:39.148 [2024-11-25 23:22:11.472583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.148 [2024-11-25 23:22:11.472655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.148 [2024-11-25 23:22:11.472663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.148 [2024-11-25 23:22:11.472670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:39.148 [2024-11-25 23:22:11.472677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.148 [2024-11-25 23:22:11.472696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.148 [2024-11-25 23:22:11.472703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:39.148 [2024-11-25 23:22:11.472710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:39.148 [2024-11-25 23:22:11.472716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.148 [2024-11-25 23:22:11.472735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:39.148 [2024-11-25 23:22:11.475675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.148 [2024-11-25 23:22:11.475697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.148 [2024-11-25 23:22:11.475705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.945 ms 00:20:39.148 [2024-11-25 23:22:11.475711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.148 [2024-11-25 23:22:11.475743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.148 [2024-11-25 23:22:11.475749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:39.148 [2024-11-25 23:22:11.475756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:39.148 [2024-11-25 23:22:11.475761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.148 [2024-11-25 23:22:11.475778] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:39.148 [2024-11-25 23:22:11.475793] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:39.148 [2024-11-25 23:22:11.475822] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:39.148 [2024-11-25 23:22:11.475835] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:39.148 [2024-11-25 23:22:11.475918] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:39.148 [2024-11-25 23:22:11.475926] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:39.148 [2024-11-25 23:22:11.475934] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:39.149 [2024-11-25 23:22:11.475945] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:39.149 [2024-11-25 23:22:11.475953] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:39.149 [2024-11-25 23:22:11.475960] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:39.149 [2024-11-25 23:22:11.475966] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:39.149 [2024-11-25 23:22:11.475972] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:39.149 [2024-11-25 23:22:11.475978] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:39.149 [2024-11-25 23:22:11.475984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.149 [2024-11-25 23:22:11.475990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:39.149 [2024-11-25 23:22:11.475996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:20:39.149 [2024-11-25 23:22:11.476001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.149 [2024-11-25 23:22:11.476079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.149 [2024-11-25 23:22:11.476089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:39.149 [2024-11-25 23:22:11.476095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:39.149 [2024-11-25 23:22:11.476101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.149 [2024-11-25 23:22:11.476179] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:39.149 [2024-11-25 23:22:11.476188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:39.149 [2024-11-25 23:22:11.476195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.149 [2024-11-25 23:22:11.476201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.149 [2024-11-25 23:22:11.476207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:39.149 [2024-11-25 23:22:11.476212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:39.149 [2024-11-25 23:22:11.476218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:39.149 [2024-11-25 23:22:11.476224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:39.149 [2024-11-25 23:22:11.476229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:39.149 [2024-11-25 23:22:11.476235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.149 [2024-11-25 23:22:11.476241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:39.149 [2024-11-25 23:22:11.476269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:39.149 [2024-11-25 23:22:11.476275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.149 [2024-11-25 23:22:11.476281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:39.149 [2024-11-25 23:22:11.476288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:39.149 [2024-11-25 23:22:11.476298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.149 [2024-11-25 23:22:11.476303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:39.149 [2024-11-25 23:22:11.476309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:39.149 [2024-11-25 23:22:11.476314] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.149 [2024-11-25 23:22:11.476320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:39.149 [2024-11-25 23:22:11.476326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:39.149 [2024-11-25 23:22:11.476332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.149 [2024-11-25 23:22:11.476337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:39.149 [2024-11-25 23:22:11.476342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:39.149 [2024-11-25 23:22:11.476347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.149 [2024-11-25 23:22:11.476352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:39.149 [2024-11-25 23:22:11.476357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:39.149 [2024-11-25 23:22:11.476362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.149 [2024-11-25 23:22:11.476368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:39.149 [2024-11-25 23:22:11.476373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:39.149 [2024-11-25 23:22:11.476379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.149 [2024-11-25 23:22:11.476384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:39.149 [2024-11-25 23:22:11.476389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:39.149 [2024-11-25 23:22:11.476394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.149 [2024-11-25 23:22:11.476399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:39.149 [2024-11-25 23:22:11.476404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:39.149 [2024-11-25 23:22:11.476409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.149 [2024-11-25 23:22:11.476415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:39.149 [2024-11-25 23:22:11.476420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:39.149 [2024-11-25 23:22:11.476425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.149 [2024-11-25 23:22:11.476430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:39.149 [2024-11-25 23:22:11.476435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:39.149 [2024-11-25 23:22:11.476440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.149 [2024-11-25 23:22:11.476446] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:39.149 [2024-11-25 23:22:11.476452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:39.149 [2024-11-25 23:22:11.476462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.149 [2024-11-25 23:22:11.476469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.149 [2024-11-25 23:22:11.476474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:39.149 [2024-11-25 23:22:11.476480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:39.149 [2024-11-25 23:22:11.476485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:39.149 
[2024-11-25 23:22:11.476490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:39.149 [2024-11-25 23:22:11.476494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:39.149 [2024-11-25 23:22:11.476500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:39.149 [2024-11-25 23:22:11.476505] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:39.149 [2024-11-25 23:22:11.476513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.149 [2024-11-25 23:22:11.476519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:39.149 [2024-11-25 23:22:11.476525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:39.149 [2024-11-25 23:22:11.476530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:39.149 [2024-11-25 23:22:11.476535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:39.149 [2024-11-25 23:22:11.476541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:39.149 [2024-11-25 23:22:11.476546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:39.149 [2024-11-25 23:22:11.476552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:39.149 [2024-11-25 23:22:11.476558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:39.149 [2024-11-25 23:22:11.476563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:39.149 [2024-11-25 23:22:11.476569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:39.149 [2024-11-25 23:22:11.476574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:39.149 [2024-11-25 23:22:11.476579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:39.149 [2024-11-25 23:22:11.476584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:39.149 [2024-11-25 23:22:11.476590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:39.149 [2024-11-25 23:22:11.476595] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:39.149 [2024-11-25 23:22:11.476601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.149 [2024-11-25 23:22:11.476608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:39.149 [2024-11-25 23:22:11.476613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:39.149 [2024-11-25 23:22:11.476618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:39.149 [2024-11-25 23:22:11.476623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:39.149 [2024-11-25 23:22:11.476629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.149 [2024-11-25 23:22:11.476638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:39.149 [2024-11-25 23:22:11.476644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:20:39.149 [2024-11-25 23:22:11.476650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.149 [2024-11-25 23:22:11.500884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.149 [2024-11-25 23:22:11.500911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.149 [2024-11-25 23:22:11.500919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.175 ms 00:20:39.149 [2024-11-25 23:22:11.500928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.149 [2024-11-25 23:22:11.501023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.149 [2024-11-25 23:22:11.501030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:39.150 [2024-11-25 23:22:11.501037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:39.150 [2024-11-25 23:22:11.501043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.410 [2024-11-25 23:22:11.542563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.410 [2024-11-25 23:22:11.542597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.410 [2024-11-25 23:22:11.542606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.489 ms 00:20:39.410 [2024-11-25 23:22:11.542612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.410 [2024-11-25 23:22:11.542671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.410 [2024-11-25 23:22:11.542680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.410 [2024-11-25 23:22:11.542687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:39.410 [2024-11-25 23:22:11.542693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.410 [2024-11-25 23:22:11.543115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.410 [2024-11-25 23:22:11.543130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.410 [2024-11-25 23:22:11.543143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:20:39.410 [2024-11-25 23:22:11.543149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.410 [2024-11-25 23:22:11.543266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.410 [2024-11-25 23:22:11.543275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.410 [2024-11-25 23:22:11.543282] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:39.411 [2024-11-25 23:22:11.543288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.411 [2024-11-25 23:22:11.555507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.411 [2024-11-25 23:22:11.555531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.411 [2024-11-25 23:22:11.555539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.202 ms 00:20:39.411 [2024-11-25 23:22:11.555545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.411 [2024-11-25 23:22:11.566242] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:39.411 [2024-11-25 23:22:11.566269] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:39.411 [2024-11-25 23:22:11.566279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.411 [2024-11-25 23:22:11.566286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:39.411 [2024-11-25 23:22:11.566293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.658 ms 00:20:39.411 [2024-11-25 23:22:11.566299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.411 [2024-11-25 23:22:11.585016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.411 [2024-11-25 23:22:11.585045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:39.411 [2024-11-25 23:22:11.585054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.660 ms 00:20:39.411 [2024-11-25 23:22:11.585071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.411 [2024-11-25 23:22:11.594466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.411 [2024-11-25 23:22:11.594497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:39.411 [2024-11-25 23:22:11.594507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.116 ms 00:20:39.411 [2024-11-25 23:22:11.594513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.411 [2024-11-25 23:22:11.603099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.411 [2024-11-25 23:22:11.603124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:39.411 [2024-11-25 23:22:11.603132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.540 ms 00:20:39.411 [2024-11-25 23:22:11.603138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.411 [2024-11-25 23:22:11.603610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.411 [2024-11-25 23:22:11.603621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:39.411 [2024-11-25 23:22:11.603628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:20:39.411 [2024-11-25 23:22:11.603636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.411 [2024-11-25 23:22:11.651514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.411 [2024-11-25 23:22:11.651545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:39.411 [2024-11-25 23:22:11.651554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 47.860 ms 00:20:39.411 [2024-11-25 23:22:11.651562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.411 [2024-11-25 23:22:11.660038] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:39.411 [2024-11-25 23:22:11.674678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.411 [2024-11-25 23:22:11.674706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:39.411 [2024-11-25 23:22:11.674719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.042 ms 00:20:39.411 [2024-11-25 23:22:11.674726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.411 [2024-11-25 23:22:11.674791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.411 [2024-11-25 23:22:11.674799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:39.411 [2024-11-25 23:22:11.674806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:39.411 [2024-11-25 23:22:11.674812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.411 [2024-11-25 23:22:11.674855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.411 [2024-11-25 23:22:11.674867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:39.411 [2024-11-25 23:22:11.674876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:39.411 [2024-11-25 23:22:11.674884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.411 [2024-11-25 23:22:11.674906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.411 [2024-11-25 23:22:11.674913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:39.411 [2024-11-25 23:22:11.674919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:39.411 [2024-11-25 23:22:11.674925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.411 [2024-11-25 23:22:11.674953] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:39.411 [2024-11-25 23:22:11.674960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.411 [2024-11-25 23:22:11.674967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:39.411 [2024-11-25 23:22:11.674973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:39.411 [2024-11-25 23:22:11.674979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.411 [2024-11-25 23:22:11.693781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.411 [2024-11-25 23:22:11.693808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:39.411 [2024-11-25 23:22:11.693817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.785 ms 00:20:39.411 [2024-11-25 23:22:11.693824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.411 [2024-11-25 23:22:11.693901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.411 [2024-11-25 23:22:11.693910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:39.411 [2024-11-25 23:22:11.693917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:39.411 [2024-11-25 23:22:11.693925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
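Note the symmetry with the shutdown sequence earlier in the log: startup marks the superblock dirty ('Set FTL dirty state', ~18 ms here) before any user I/O, and only an orderly shutdown writes it back clean ('Set FTL clean state'); a dirty flag still set at the next load forces metadata recovery instead of the fast restore path seen above. A trivial way to audit that pairing in a captured log (record text exactly as it appears here; the file name is illustrative):

grep -nE 'Set FTL (dirty|clean) state' nvme-vg-autotest.log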
00:20:39.411 [2024-11-25 23:22:11.694742] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:39.411 [2024-11-25 23:22:11.696989] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 243.073 ms, result 0 00:20:39.411 [2024-11-25 23:22:11.698140] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:39.411 [2024-11-25 23:22:11.708882] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:40.798  [2024-11-25T23:22:14.113Z] Copying: 15/256 [MB] (15 MBps) [... 20 intermediate progress ticks (10-15 MBps) elided ...] [2024-11-25T23:22:34.163Z] Copying: 256/256 [MB] (average 11 MBps)[2024-11-25 23:22:33.754192] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:01.794 [2024-11-25 23:22:33.769201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.794 [2024-11-25 23:22:33.769252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:01.794 [2024-11-25 23:22:33.769282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:01.794 [2024-11-25 23:22:33.769292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.794 [2024-11-25 23:22:33.769317] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:01.794 [2024-11-25 23:22:33.772635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.794 [2024-11-25 23:22:33.772678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:01.794 [2024-11-25 23:22:33.772691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.303 ms 00:21:01.794 [2024-11-25 23:22:33.772700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.794 [2024-11-25 23:22:33.773010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.794 [2024-11-25 23:22:33.773022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:01.794 [2024-11-25 23:22:33.773033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:21:01.794 [2024-11-25 23:22:33.773041] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:21:01.794 [2024-11-25 23:22:33.776961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.794 [2024-11-25 23:22:33.776990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:01.794 [2024-11-25 23:22:33.777001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.882 ms 00:21:01.794 [2024-11-25 23:22:33.777010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.795 [2024-11-25 23:22:33.783923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.795 [2024-11-25 23:22:33.784136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:01.795 [2024-11-25 23:22:33.784161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.895 ms 00:21:01.795 [2024-11-25 23:22:33.784170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.795 [2024-11-25 23:22:33.809401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.795 [2024-11-25 23:22:33.809449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:01.795 [2024-11-25 23:22:33.809462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.154 ms 00:21:01.795 [2024-11-25 23:22:33.809470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.795 [2024-11-25 23:22:33.826648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.795 [2024-11-25 23:22:33.826697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:01.795 [2024-11-25 23:22:33.826710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.130 ms 00:21:01.795 [2024-11-25 23:22:33.826718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.795 [2024-11-25 23:22:33.826874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.795 [2024-11-25 23:22:33.826886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:01.795 [2024-11-25 23:22:33.826910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:21:01.795 [2024-11-25 23:22:33.826919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.795 [2024-11-25 23:22:33.852986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.795 [2024-11-25 23:22:33.853030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:01.795 [2024-11-25 23:22:33.853042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.049 ms 00:21:01.795 [2024-11-25 23:22:33.853049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.795 [2024-11-25 23:22:33.878339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.795 [2024-11-25 23:22:33.878532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:01.795 [2024-11-25 23:22:33.878554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.206 ms 00:21:01.795 [2024-11-25 23:22:33.878562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.795 [2024-11-25 23:22:33.903566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.795 [2024-11-25 23:22:33.903613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:01.795 [2024-11-25 23:22:33.903625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.877 ms 00:21:01.795 
[2024-11-25 23:22:33.903632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.795 [2024-11-25 23:22:33.928275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.795 [2024-11-25 23:22:33.928317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:01.795 [2024-11-25 23:22:33.928329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.549 ms 00:21:01.795 [2024-11-25 23:22:33.928336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.795 [2024-11-25 23:22:33.928382] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:01.795 [2024-11-25 23:22:33.928400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928563] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 [2024-11-25 23:22:33.928751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:01.795 
[2024-11-25 23:22:33.928758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:21:01.796 [2024-11-25 23:22:33.928975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.928997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:01.796 [2024-11-25 23:22:33.929279] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:01.796 [2024-11-25 23:22:33.929289] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a832dbc-fe5a-4899-8dc2-20f67e9df730 00:21:01.796 [2024-11-25 23:22:33.929298] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:01.796 [2024-11-25 23:22:33.929305] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:01.796 [2024-11-25 23:22:33.929313] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:01.796 [2024-11-25 23:22:33.929322] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:01.796 [2024-11-25 23:22:33.929331] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:01.796 [2024-11-25 23:22:33.929339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:01.796 [2024-11-25 23:22:33.929350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:01.796 [2024-11-25 23:22:33.929385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:01.796 [2024-11-25 23:22:33.929391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:01.796 [2024-11-25 23:22:33.929399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.796 [2024-11-25 23:22:33.929406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:01.796 [2024-11-25 23:22:33.929415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:21:01.796 [2024-11-25 23:22:33.929424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.796 [2024-11-25 23:22:33.943849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.796 [2024-11-25 23:22:33.943891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:01.796 [2024-11-25 23:22:33.943902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.390 ms 00:21:01.796 [2024-11-25 23:22:33.943917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.797 [2024-11-25 23:22:33.944408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.797 [2024-11-25 23:22:33.944423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:01.797 [2024-11-25 23:22:33.944434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:21:01.797 [2024-11-25 23:22:33.944441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.797 [2024-11-25 23:22:33.986179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.797 [2024-11-25 23:22:33.986392] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:01.797 [2024-11-25 23:22:33.986413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.797 [2024-11-25 23:22:33.986428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.797 [2024-11-25 23:22:33.986535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.797 [2024-11-25 23:22:33.986546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:01.797 [2024-11-25 23:22:33.986555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.797 [2024-11-25 23:22:33.986564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.797 [2024-11-25 23:22:33.986617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.797 [2024-11-25 23:22:33.986629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:01.797 [2024-11-25 23:22:33.986638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.797 [2024-11-25 23:22:33.986647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.797 [2024-11-25 23:22:33.986669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.797 [2024-11-25 23:22:33.986678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:01.797 [2024-11-25 23:22:33.986686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.797 [2024-11-25 23:22:33.986693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.797 [2024-11-25 23:22:34.077125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.797 [2024-11-25 23:22:34.077360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:01.797 [2024-11-25 23:22:34.077386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.797 [2024-11-25 23:22:34.077405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.797 [2024-11-25 23:22:34.150836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.797 [2024-11-25 23:22:34.150896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:01.797 [2024-11-25 23:22:34.150910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.797 [2024-11-25 23:22:34.150920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.797 [2024-11-25 23:22:34.151022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.797 [2024-11-25 23:22:34.151034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:01.797 [2024-11-25 23:22:34.151044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.797 [2024-11-25 23:22:34.151053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.797 [2024-11-25 23:22:34.151119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.797 [2024-11-25 23:22:34.151136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:01.797 [2024-11-25 23:22:34.151149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.797 [2024-11-25 23:22:34.151159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.797 [2024-11-25 23:22:34.151274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:21:01.797 [2024-11-25 23:22:34.151285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:01.797 [2024-11-25 23:22:34.151295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.797 [2024-11-25 23:22:34.151304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.797 [2024-11-25 23:22:34.151343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.797 [2024-11-25 23:22:34.151362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:01.797 [2024-11-25 23:22:34.151373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.797 [2024-11-25 23:22:34.151383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.797 [2024-11-25 23:22:34.151437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.797 [2024-11-25 23:22:34.151447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:01.797 [2024-11-25 23:22:34.151457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.797 [2024-11-25 23:22:34.151466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.797 [2024-11-25 23:22:34.151524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.797 [2024-11-25 23:22:34.151540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:01.797 [2024-11-25 23:22:34.151551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.797 [2024-11-25 23:22:34.151560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.797 [2024-11-25 23:22:34.151748] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.529 ms, result 0 00:21:02.741 00:21:02.741 00:21:02.741 23:22:34 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:03.316 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:03.316 23:22:35 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:03.316 23:22:35 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:03.316 23:22:35 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:03.316 23:22:35 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:03.316 23:22:35 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:03.316 23:22:35 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:03.316 23:22:35 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 77068 00:21:03.316 23:22:35 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77068 ']' 00:21:03.316 Process with pid 77068 is not found 00:21:03.316 23:22:35 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77068 00:21:03.316 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77068) - No such process 00:21:03.316 23:22:35 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 77068 is not found' 00:21:03.316 ************************************ 00:21:03.316 END TEST ftl_trim 00:21:03.316 ************************************ 00:21:03.316 00:21:03.316 real 1m32.544s 00:21:03.316 user 1m48.892s 00:21:03.316 sys 0m13.806s 00:21:03.316 23:22:35 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 
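The "md5sum -c" pass above is the heart of the trim test's data check: a reference checksum of test/ftl/data is recorded while the device is live, and the contents read back after the FTL shutdown/startup cycle must reproduce it. A sketch of the idiom, using the paths from the log (generation of the data file itself is elided):

cd /home/vagrant/spdk_repo/spdk/test/ftl
md5sum data > testfile.md5    # record the reference checksum beforehand
# ... FTL shutdown and restart happen here ...
md5sum -c testfile.md5        # prints "data: OK" when the contents match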
00:21:03.316 23:22:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:03.316 23:22:35 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:03.316 23:22:35 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:03.316 23:22:35 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:03.316 23:22:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:03.316 ************************************ 00:21:03.316 START TEST ftl_restore 00:21:03.316 ************************************ 00:21:03.316 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:03.316 * Looking for test storage... 00:21:03.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:03.316 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:03.316 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:03.316 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:21:03.579 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:03.579 23:22:35 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:03.579 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:03.579 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:03.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.579 --rc genhtml_branch_coverage=1 00:21:03.579 --rc genhtml_function_coverage=1 00:21:03.579 --rc genhtml_legend=1 00:21:03.579 --rc geninfo_all_blocks=1 00:21:03.579 --rc geninfo_unexecuted_blocks=1 00:21:03.579 00:21:03.579 ' 00:21:03.579 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:03.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.579 --rc genhtml_branch_coverage=1 00:21:03.579 --rc genhtml_function_coverage=1 00:21:03.579 --rc genhtml_legend=1 00:21:03.579 --rc geninfo_all_blocks=1 00:21:03.579 --rc geninfo_unexecuted_blocks=1 00:21:03.579 00:21:03.579 ' 00:21:03.579 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:03.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.579 --rc genhtml_branch_coverage=1 00:21:03.579 --rc genhtml_function_coverage=1 00:21:03.579 --rc genhtml_legend=1 00:21:03.579 --rc geninfo_all_blocks=1 00:21:03.579 --rc geninfo_unexecuted_blocks=1 00:21:03.579 00:21:03.579 ' 00:21:03.579 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:03.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.579 --rc genhtml_branch_coverage=1 00:21:03.579 --rc genhtml_function_coverage=1 00:21:03.579 --rc genhtml_legend=1 00:21:03.579 --rc geninfo_all_blocks=1 00:21:03.579 --rc geninfo_unexecuted_blocks=1 00:21:03.579 00:21:03.579 ' 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
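The xtrace above is scripts/common.sh comparing the installed lcov version (1.15) against the required 2, one dot-separated component at a time. A condensed sketch of that comparison, assuming the same two version strings:

IFS=.-: read -ra ver1 <<< "1.15"
IFS=.-: read -ra ver2 <<< "2"
# the first differing component decides; missing components default to 0
(( ${ver1[0]:-0} < ${ver2[0]:-0} )) && echo "1.15 is older than 2"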
00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.MSHiJkHcgN 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:03.579 
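restore.sh is invoked here as "restore.sh -c 0000:00:10.0 0000:00:11.0", and the getopts walk above picks that apart: -c selects the NV-cache controller, and the first positional argument left after the shift is the base device. A minimal sketch of the parsing, with variable names taken from the log (the -u/-f cases are elided):

while getopts :u:c:f opt; do
  case $opt in
    c) nv_cache=$OPTARG ;;    # 0000:00:10.0, the write-buffer cache controller
    *) ;;                     # other options elided
  esac
done
shift $((OPTIND - 1))         # the trace shows "shift 2" for the single -c pair
device=$1                     # 0000:00:11.0, the base NVMe controller
timeout=240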
23:22:35 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77436 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:03.579 23:22:35 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77436 00:21:03.579 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77436 ']' 00:21:03.579 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.579 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.579 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.579 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.579 23:22:35 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:03.579 [2024-11-25 23:22:35.832394] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:21:03.580 [2024-11-25 23:22:35.832602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77436 ] 00:21:03.840 [2024-11-25 23:22:35.986711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.840 [2024-11-25 23:22:36.076732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.417 23:22:36 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.417 23:22:36 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:21:04.417 23:22:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:04.417 23:22:36 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:04.418 23:22:36 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:04.418 23:22:36 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:04.418 23:22:36 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:04.418 23:22:36 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:04.679 23:22:36 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:04.679 23:22:36 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:04.679 23:22:36 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:04.679 23:22:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:04.679 23:22:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:04.679 23:22:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:04.679 23:22:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:04.679 23:22:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:04.941 23:22:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:04.941 { 00:21:04.941 "name": "nvme0n1", 00:21:04.941 "aliases": [ 00:21:04.941 "fd619698-e378-49dc-bcca-e3939a7ebe51" 00:21:04.941 ], 00:21:04.941 "product_name": "NVMe disk", 00:21:04.941 "block_size": 4096, 00:21:04.941 "num_blocks": 1310720, 00:21:04.941 "uuid": 
"fd619698-e378-49dc-bcca-e3939a7ebe51", 00:21:04.941 "numa_id": -1, 00:21:04.941 "assigned_rate_limits": { 00:21:04.941 "rw_ios_per_sec": 0, 00:21:04.941 "rw_mbytes_per_sec": 0, 00:21:04.941 "r_mbytes_per_sec": 0, 00:21:04.941 "w_mbytes_per_sec": 0 00:21:04.941 }, 00:21:04.941 "claimed": true, 00:21:04.941 "claim_type": "read_many_write_one", 00:21:04.941 "zoned": false, 00:21:04.941 "supported_io_types": { 00:21:04.941 "read": true, 00:21:04.941 "write": true, 00:21:04.941 "unmap": true, 00:21:04.941 "flush": true, 00:21:04.941 "reset": true, 00:21:04.941 "nvme_admin": true, 00:21:04.941 "nvme_io": true, 00:21:04.941 "nvme_io_md": false, 00:21:04.941 "write_zeroes": true, 00:21:04.941 "zcopy": false, 00:21:04.941 "get_zone_info": false, 00:21:04.941 "zone_management": false, 00:21:04.941 "zone_append": false, 00:21:04.941 "compare": true, 00:21:04.941 "compare_and_write": false, 00:21:04.941 "abort": true, 00:21:04.941 "seek_hole": false, 00:21:04.941 "seek_data": false, 00:21:04.941 "copy": true, 00:21:04.941 "nvme_iov_md": false 00:21:04.941 }, 00:21:04.941 "driver_specific": { 00:21:04.941 "nvme": [ 00:21:04.941 { 00:21:04.941 "pci_address": "0000:00:11.0", 00:21:04.941 "trid": { 00:21:04.941 "trtype": "PCIe", 00:21:04.941 "traddr": "0000:00:11.0" 00:21:04.941 }, 00:21:04.941 "ctrlr_data": { 00:21:04.941 "cntlid": 0, 00:21:04.941 "vendor_id": "0x1b36", 00:21:04.941 "model_number": "QEMU NVMe Ctrl", 00:21:04.941 "serial_number": "12341", 00:21:04.941 "firmware_revision": "8.0.0", 00:21:04.941 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:04.941 "oacs": { 00:21:04.941 "security": 0, 00:21:04.941 "format": 1, 00:21:04.941 "firmware": 0, 00:21:04.941 "ns_manage": 1 00:21:04.941 }, 00:21:04.941 "multi_ctrlr": false, 00:21:04.941 "ana_reporting": false 00:21:04.941 }, 00:21:04.941 "vs": { 00:21:04.941 "nvme_version": "1.4" 00:21:04.941 }, 00:21:04.941 "ns_data": { 00:21:04.941 "id": 1, 00:21:04.941 "can_share": false 00:21:04.941 } 00:21:04.941 } 00:21:04.941 ], 00:21:04.941 "mp_policy": "active_passive" 00:21:04.941 } 00:21:04.941 } 00:21:04.941 ]' 00:21:04.941 23:22:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:04.941 23:22:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:04.941 23:22:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:04.941 23:22:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:04.941 23:22:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:04.941 23:22:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:21:04.942 23:22:37 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:04.942 23:22:37 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:04.942 23:22:37 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:04.942 23:22:37 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:04.942 23:22:37 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:05.204 23:22:37 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=fc2abf3b-c4f8-4ca4-b3e4-a2a1e9d58562 00:21:05.204 23:22:37 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:05.204 23:22:37 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fc2abf3b-c4f8-4ca4-b3e4-a2a1e9d58562 00:21:05.466 23:22:37 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:05.728 23:22:37 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=a054a0a2-fe61-4999-b23d-95d4e5a6768e 00:21:05.728 23:22:37 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a054a0a2-fe61-4999-b23d-95d4e5a6768e 00:21:05.728 23:22:38 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=f5d1f4d3-02b5-4d7b-9745-5e06efa6e707 00:21:05.728 23:22:38 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:05.728 23:22:38 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f5d1f4d3-02b5-4d7b-9745-5e06efa6e707 00:21:05.728 23:22:38 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:05.728 23:22:38 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:05.728 23:22:38 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=f5d1f4d3-02b5-4d7b-9745-5e06efa6e707 00:21:05.728 23:22:38 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:05.728 23:22:38 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size f5d1f4d3-02b5-4d7b-9745-5e06efa6e707 00:21:05.728 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=f5d1f4d3-02b5-4d7b-9745-5e06efa6e707 00:21:05.728 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:05.728 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:05.728 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:05.728 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f5d1f4d3-02b5-4d7b-9745-5e06efa6e707 00:21:05.990 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:05.990 { 00:21:05.990 "name": "f5d1f4d3-02b5-4d7b-9745-5e06efa6e707", 00:21:05.990 "aliases": [ 00:21:05.990 "lvs/nvme0n1p0" 00:21:05.990 ], 00:21:05.990 "product_name": "Logical Volume", 00:21:05.990 "block_size": 4096, 00:21:05.990 "num_blocks": 26476544, 00:21:05.990 "uuid": "f5d1f4d3-02b5-4d7b-9745-5e06efa6e707", 00:21:05.990 "assigned_rate_limits": { 00:21:05.990 "rw_ios_per_sec": 0, 00:21:05.990 "rw_mbytes_per_sec": 0, 00:21:05.990 "r_mbytes_per_sec": 0, 00:21:05.990 "w_mbytes_per_sec": 0 00:21:05.990 }, 00:21:05.990 "claimed": false, 00:21:05.990 "zoned": false, 00:21:05.990 "supported_io_types": { 00:21:05.990 "read": true, 00:21:05.990 "write": true, 00:21:05.990 "unmap": true, 00:21:05.990 "flush": false, 00:21:05.990 "reset": true, 00:21:05.990 "nvme_admin": false, 00:21:05.990 "nvme_io": false, 00:21:05.990 "nvme_io_md": false, 00:21:05.990 "write_zeroes": true, 00:21:05.990 "zcopy": false, 00:21:05.990 "get_zone_info": false, 00:21:05.990 "zone_management": false, 00:21:05.990 "zone_append": false, 00:21:05.990 "compare": false, 00:21:05.990 "compare_and_write": false, 00:21:05.990 "abort": false, 00:21:05.990 "seek_hole": true, 00:21:05.990 "seek_data": true, 00:21:05.990 "copy": false, 00:21:05.990 "nvme_iov_md": false 00:21:05.990 }, 00:21:05.990 "driver_specific": { 00:21:05.990 "lvol": { 00:21:05.990 "lvol_store_uuid": "a054a0a2-fe61-4999-b23d-95d4e5a6768e", 00:21:05.990 "base_bdev": "nvme0n1", 00:21:05.990 "thin_provision": true, 00:21:05.990 "num_allocated_clusters": 0, 00:21:05.990 "snapshot": false, 00:21:05.990 "clone": false, 00:21:05.990 "esnap_clone": false 00:21:05.990 } 00:21:05.990 } 00:21:05.990 } 00:21:05.990 ]' 00:21:05.990 23:22:38 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:05.990 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:05.990 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:05.990 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:05.990 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:05.990 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:05.990 23:22:38 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:05.990 23:22:38 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:05.990 23:22:38 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:06.252 23:22:38 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:06.252 23:22:38 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:06.252 23:22:38 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size f5d1f4d3-02b5-4d7b-9745-5e06efa6e707 00:21:06.252 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=f5d1f4d3-02b5-4d7b-9745-5e06efa6e707 00:21:06.252 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:06.252 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:06.252 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:06.252 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f5d1f4d3-02b5-4d7b-9745-5e06efa6e707 00:21:06.513 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:06.514 { 00:21:06.514 "name": "f5d1f4d3-02b5-4d7b-9745-5e06efa6e707", 00:21:06.514 "aliases": [ 00:21:06.514 "lvs/nvme0n1p0" 00:21:06.514 ], 00:21:06.514 "product_name": "Logical Volume", 00:21:06.514 "block_size": 4096, 00:21:06.514 "num_blocks": 26476544, 00:21:06.514 "uuid": "f5d1f4d3-02b5-4d7b-9745-5e06efa6e707", 00:21:06.514 "assigned_rate_limits": { 00:21:06.514 "rw_ios_per_sec": 0, 00:21:06.514 "rw_mbytes_per_sec": 0, 00:21:06.514 "r_mbytes_per_sec": 0, 00:21:06.514 "w_mbytes_per_sec": 0 00:21:06.514 }, 00:21:06.514 "claimed": false, 00:21:06.514 "zoned": false, 00:21:06.514 "supported_io_types": { 00:21:06.514 "read": true, 00:21:06.514 "write": true, 00:21:06.514 "unmap": true, 00:21:06.514 "flush": false, 00:21:06.514 "reset": true, 00:21:06.514 "nvme_admin": false, 00:21:06.514 "nvme_io": false, 00:21:06.514 "nvme_io_md": false, 00:21:06.514 "write_zeroes": true, 00:21:06.514 "zcopy": false, 00:21:06.514 "get_zone_info": false, 00:21:06.514 "zone_management": false, 00:21:06.514 "zone_append": false, 00:21:06.514 "compare": false, 00:21:06.514 "compare_and_write": false, 00:21:06.514 "abort": false, 00:21:06.514 "seek_hole": true, 00:21:06.514 "seek_data": true, 00:21:06.514 "copy": false, 00:21:06.514 "nvme_iov_md": false 00:21:06.514 }, 00:21:06.514 "driver_specific": { 00:21:06.514 "lvol": { 00:21:06.514 "lvol_store_uuid": "a054a0a2-fe61-4999-b23d-95d4e5a6768e", 00:21:06.514 "base_bdev": "nvme0n1", 00:21:06.514 "thin_provision": true, 00:21:06.514 "num_allocated_clusters": 0, 00:21:06.514 "snapshot": false, 00:21:06.514 "clone": false, 00:21:06.514 "esnap_clone": false 00:21:06.514 } 00:21:06.514 } 00:21:06.514 } 00:21:06.514 ]' 00:21:06.514 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
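The logical volume dumped above is the thin-provisioned split that becomes the FTL base device. Condensing the RPC calls the log has issued so far (UUIDs are the ones reported above; rpc.py is invoked here by a relative path for brevity):

./scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
./scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a054a0a2-fe61-4999-b23d-95d4e5a6768e
./scripts/rpc.py bdev_get_bdevs -b f5d1f4d3-02b5-4d7b-9745-5e06efa6e707   # 26476544 blocks = 103424 MiB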
00:21:06.514 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:06.514 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:06.514 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:06.514 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:06.514 23:22:38 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:06.514 23:22:38 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:06.514 23:22:38 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:06.775 23:22:39 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:06.775 23:22:39 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size f5d1f4d3-02b5-4d7b-9745-5e06efa6e707 00:21:06.775 23:22:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=f5d1f4d3-02b5-4d7b-9745-5e06efa6e707 00:21:06.775 23:22:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:06.775 23:22:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:06.775 23:22:39 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:06.775 23:22:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f5d1f4d3-02b5-4d7b-9745-5e06efa6e707 00:21:07.035 23:22:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:07.035 { 00:21:07.035 "name": "f5d1f4d3-02b5-4d7b-9745-5e06efa6e707", 00:21:07.035 "aliases": [ 00:21:07.035 "lvs/nvme0n1p0" 00:21:07.035 ], 00:21:07.035 "product_name": "Logical Volume", 00:21:07.035 "block_size": 4096, 00:21:07.035 "num_blocks": 26476544, 00:21:07.035 "uuid": "f5d1f4d3-02b5-4d7b-9745-5e06efa6e707", 00:21:07.035 "assigned_rate_limits": { 00:21:07.035 "rw_ios_per_sec": 0, 00:21:07.035 "rw_mbytes_per_sec": 0, 00:21:07.035 "r_mbytes_per_sec": 0, 00:21:07.035 "w_mbytes_per_sec": 0 00:21:07.035 }, 00:21:07.035 "claimed": false, 00:21:07.035 "zoned": false, 00:21:07.035 "supported_io_types": { 00:21:07.035 "read": true, 00:21:07.035 "write": true, 00:21:07.035 "unmap": true, 00:21:07.035 "flush": false, 00:21:07.035 "reset": true, 00:21:07.035 "nvme_admin": false, 00:21:07.035 "nvme_io": false, 00:21:07.035 "nvme_io_md": false, 00:21:07.035 "write_zeroes": true, 00:21:07.035 "zcopy": false, 00:21:07.035 "get_zone_info": false, 00:21:07.035 "zone_management": false, 00:21:07.035 "zone_append": false, 00:21:07.035 "compare": false, 00:21:07.035 "compare_and_write": false, 00:21:07.035 "abort": false, 00:21:07.035 "seek_hole": true, 00:21:07.035 "seek_data": true, 00:21:07.035 "copy": false, 00:21:07.035 "nvme_iov_md": false 00:21:07.035 }, 00:21:07.035 "driver_specific": { 00:21:07.035 "lvol": { 00:21:07.035 "lvol_store_uuid": "a054a0a2-fe61-4999-b23d-95d4e5a6768e", 00:21:07.035 "base_bdev": "nvme0n1", 00:21:07.035 "thin_provision": true, 00:21:07.035 "num_allocated_clusters": 0, 00:21:07.035 "snapshot": false, 00:21:07.035 "clone": false, 00:21:07.035 "esnap_clone": false 00:21:07.035 } 00:21:07.035 } 00:21:07.035 } 00:21:07.035 ]' 00:21:07.035 23:22:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:07.035 23:22:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:07.035 23:22:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:07.035 23:22:39 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:21:07.035 23:22:39 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:07.035 23:22:39 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:07.035 23:22:39 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:07.035 23:22:39 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f5d1f4d3-02b5-4d7b-9745-5e06efa6e707 --l2p_dram_limit 10' 00:21:07.035 23:22:39 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:07.035 23:22:39 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:07.035 23:22:39 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:07.035 23:22:39 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:07.035 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:07.035 23:22:39 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f5d1f4d3-02b5-4d7b-9745-5e06efa6e707 --l2p_dram_limit 10 -c nvc0n1p0 00:21:07.298 [2024-11-25 23:22:39.523335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.298 [2024-11-25 23:22:39.523464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:07.298 [2024-11-25 23:22:39.523484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:07.298 [2024-11-25 23:22:39.523492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.298 [2024-11-25 23:22:39.523539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.298 [2024-11-25 23:22:39.523547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:07.298 [2024-11-25 23:22:39.523556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:07.298 [2024-11-25 23:22:39.523562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.298 [2024-11-25 23:22:39.523582] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:07.298 [2024-11-25 23:22:39.524149] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:07.298 [2024-11-25 23:22:39.524167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.298 [2024-11-25 23:22:39.524173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:07.298 [2024-11-25 23:22:39.524182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:21:07.298 [2024-11-25 23:22:39.524188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.298 [2024-11-25 23:22:39.524213] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a5516937-5c6e-4854-a6f9-4ae2e284b0cc 00:21:07.298 [2024-11-25 23:22:39.525472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.298 [2024-11-25 23:22:39.525502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:07.298 [2024-11-25 23:22:39.525511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:07.298 [2024-11-25 23:22:39.525522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.298 [2024-11-25 23:22:39.532352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.298 [2024-11-25 
23:22:39.532381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:07.298 [2024-11-25 23:22:39.532388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.793 ms 00:21:07.298 [2024-11-25 23:22:39.532395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.298 [2024-11-25 23:22:39.532493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.298 [2024-11-25 23:22:39.532503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:07.298 [2024-11-25 23:22:39.532510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:07.298 [2024-11-25 23:22:39.532520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.298 [2024-11-25 23:22:39.532559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.298 [2024-11-25 23:22:39.532569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:07.298 [2024-11-25 23:22:39.532578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:07.298 [2024-11-25 23:22:39.532585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.298 [2024-11-25 23:22:39.532601] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:07.298 [2024-11-25 23:22:39.535831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.298 [2024-11-25 23:22:39.535852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:07.298 [2024-11-25 23:22:39.535863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.231 ms 00:21:07.298 [2024-11-25 23:22:39.535868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.298 [2024-11-25 23:22:39.535899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.298 [2024-11-25 23:22:39.535906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:07.298 [2024-11-25 23:22:39.535913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:07.298 [2024-11-25 23:22:39.535919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.298 [2024-11-25 23:22:39.535933] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:07.298 [2024-11-25 23:22:39.536041] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:07.298 [2024-11-25 23:22:39.536067] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:07.298 [2024-11-25 23:22:39.536077] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:07.298 [2024-11-25 23:22:39.536086] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:07.298 [2024-11-25 23:22:39.536093] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:07.298 [2024-11-25 23:22:39.536102] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:07.298 [2024-11-25 23:22:39.536108] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:07.298 [2024-11-25 23:22:39.536117] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:07.298 [2024-11-25 23:22:39.536123] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:07.298 [2024-11-25 23:22:39.536130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.298 [2024-11-25 23:22:39.536141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:07.298 [2024-11-25 23:22:39.536149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:21:07.298 [2024-11-25 23:22:39.536155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.298 [2024-11-25 23:22:39.536221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.298 [2024-11-25 23:22:39.536228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:07.298 [2024-11-25 23:22:39.536237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:07.298 [2024-11-25 23:22:39.536243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.298 [2024-11-25 23:22:39.536322] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:07.298 [2024-11-25 23:22:39.536330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:07.298 [2024-11-25 23:22:39.536337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:07.298 [2024-11-25 23:22:39.536344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.298 [2024-11-25 23:22:39.536352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:07.298 [2024-11-25 23:22:39.536357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:07.298 [2024-11-25 23:22:39.536364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:07.298 [2024-11-25 23:22:39.536370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:07.298 [2024-11-25 23:22:39.536376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:07.298 [2024-11-25 23:22:39.536382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:07.298 [2024-11-25 23:22:39.536389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:07.298 [2024-11-25 23:22:39.536395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:07.298 [2024-11-25 23:22:39.536401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:07.298 [2024-11-25 23:22:39.536407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:07.298 [2024-11-25 23:22:39.536413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:07.298 [2024-11-25 23:22:39.536418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.298 [2024-11-25 23:22:39.536428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:07.298 [2024-11-25 23:22:39.536434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:07.298 [2024-11-25 23:22:39.536443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.298 [2024-11-25 23:22:39.536449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:07.298 [2024-11-25 23:22:39.536456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:07.298 [2024-11-25 23:22:39.536461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.298 [2024-11-25 23:22:39.536468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:07.298 
[2024-11-25 23:22:39.536473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:07.298 [2024-11-25 23:22:39.536479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.298 [2024-11-25 23:22:39.536484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:07.298 [2024-11-25 23:22:39.536491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:07.298 [2024-11-25 23:22:39.536497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.298 [2024-11-25 23:22:39.536503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:07.298 [2024-11-25 23:22:39.536508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:07.298 [2024-11-25 23:22:39.536515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.298 [2024-11-25 23:22:39.536519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:07.298 [2024-11-25 23:22:39.536528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:07.298 [2024-11-25 23:22:39.536533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:07.298 [2024-11-25 23:22:39.536540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:07.298 [2024-11-25 23:22:39.536545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:07.298 [2024-11-25 23:22:39.536551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:07.298 [2024-11-25 23:22:39.536557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:07.298 [2024-11-25 23:22:39.536563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:07.298 [2024-11-25 23:22:39.536569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.298 [2024-11-25 23:22:39.536575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:07.298 [2024-11-25 23:22:39.536580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:07.299 [2024-11-25 23:22:39.536586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.299 [2024-11-25 23:22:39.536591] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:07.299 [2024-11-25 23:22:39.536598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:07.299 [2024-11-25 23:22:39.536603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:07.299 [2024-11-25 23:22:39.536611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.299 [2024-11-25 23:22:39.536617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:07.299 [2024-11-25 23:22:39.536626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:07.299 [2024-11-25 23:22:39.536631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:07.299 [2024-11-25 23:22:39.536640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:07.299 [2024-11-25 23:22:39.536645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:07.299 [2024-11-25 23:22:39.536653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:07.299 [2024-11-25 23:22:39.536661] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:07.299 [2024-11-25 
23:22:39.536671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:07.299 [2024-11-25 23:22:39.536678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:07.299 [2024-11-25 23:22:39.536686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:07.299 [2024-11-25 23:22:39.536692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:07.299 [2024-11-25 23:22:39.536698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:07.299 [2024-11-25 23:22:39.536703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:07.299 [2024-11-25 23:22:39.536710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:07.299 [2024-11-25 23:22:39.536716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:07.299 [2024-11-25 23:22:39.536723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:07.299 [2024-11-25 23:22:39.536728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:07.299 [2024-11-25 23:22:39.536736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:07.299 [2024-11-25 23:22:39.536741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:07.299 [2024-11-25 23:22:39.536748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:07.299 [2024-11-25 23:22:39.536754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:07.299 [2024-11-25 23:22:39.536762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:07.299 [2024-11-25 23:22:39.536767] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:07.299 [2024-11-25 23:22:39.536775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:07.299 [2024-11-25 23:22:39.536780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:07.299 [2024-11-25 23:22:39.536788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:07.299 [2024-11-25 23:22:39.536793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:07.299 [2024-11-25 23:22:39.536800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:07.299 [2024-11-25 23:22:39.536805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.299 [2024-11-25 23:22:39.536814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:07.299 [2024-11-25 23:22:39.536820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:21:07.299 [2024-11-25 23:22:39.536826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.299 [2024-11-25 23:22:39.536875] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:07.299 [2024-11-25 23:22:39.536888] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:11.512 [2024-11-25 23:22:43.496630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.512 [2024-11-25 23:22:43.496732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:11.512 [2024-11-25 23:22:43.496755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3959.733 ms 00:21:11.512 [2024-11-25 23:22:43.496768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.512 [2024-11-25 23:22:43.534675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.512 [2024-11-25 23:22:43.534751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:11.512 [2024-11-25 23:22:43.534768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.630 ms 00:21:11.512 [2024-11-25 23:22:43.534780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.512 [2024-11-25 23:22:43.534927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.512 [2024-11-25 23:22:43.534942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:11.512 [2024-11-25 23:22:43.534953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:21:11.512 [2024-11-25 23:22:43.534973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.512 [2024-11-25 23:22:43.575264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.512 [2024-11-25 23:22:43.575324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.512 [2024-11-25 23:22:43.575339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.240 ms 00:21:11.512 [2024-11-25 23:22:43.575351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.512 [2024-11-25 23:22:43.575393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.512 [2024-11-25 23:22:43.575405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:11.512 [2024-11-25 23:22:43.575415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:11.512 [2024-11-25 23:22:43.575435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.512 [2024-11-25 23:22:43.576187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.512 [2024-11-25 23:22:43.576243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.512 [2024-11-25 23:22:43.576257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:21:11.512 [2024-11-25 23:22:43.576270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.512 
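The get_bdev_size trace earlier in this run reduces to one arithmetic step: size_MiB = block_size * num_blocks / 2^20. A minimal sketch of that computation and of the FTL device creation that the trace then performs, assuming rpc.py and jq are on PATH (the bdev names, cache size, and lvol UUID below are taken from this run's trace, not general defaults):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$("$rpc" bdev_get_bdevs -b "$bdev_name")   # JSON array with one bdev
    bs=$(jq '.[] .block_size' <<< "$bdev_info")          # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")          # 26476544 in this run
    echo $(( bs * nb / 1024 / 1024 ))                    # 4096 * 26476544 / 2^20 = 103424 MiB
  }
  # Carve the 5171 MiB write-buffer cache and build the FTL bdev on the lvol,
  # mirroring the restore.sh trace above:
  "$rpc" bdev_split_create nvc0n1 -s 5171 1              # -> nvc0n1p0
  "$rpc" -t 240 bdev_ftl_create -b ftl0 -d f5d1f4d3-02b5-4d7b-9745-5e06efa6e707 \
      --l2p_dram_limit 10 -c nvc0n1p0

The '[: : integer expression expected' message from restore.sh line 54 above comes from evaluating '[ "" -eq 1 ]': -eq requires an integer on both sides, so an empty variable trips the test. Guarding with [ -n "$var" ] first (or using [[ ... ]], where an empty operand is treated as 0) would avoid the message; the script tolerates the non-zero exit and continues.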
[2024-11-25 23:22:43.576402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.512 [2024-11-25 23:22:43.576415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.512 [2024-11-25 23:22:43.576430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:21:11.512 [2024-11-25 23:22:43.576444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.512 [2024-11-25 23:22:43.596850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.512 [2024-11-25 23:22:43.596933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.512 [2024-11-25 23:22:43.596948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.384 ms 00:21:11.512 [2024-11-25 23:22:43.596960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.512 [2024-11-25 23:22:43.611846] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:11.512 [2024-11-25 23:22:43.617092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.512 [2024-11-25 23:22:43.617134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:11.512 [2024-11-25 23:22:43.617149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.038 ms 00:21:11.512 [2024-11-25 23:22:43.617158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.512 [2024-11-25 23:22:43.730706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.512 [2024-11-25 23:22:43.730767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:11.512 [2024-11-25 23:22:43.730787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.486 ms 00:21:11.512 [2024-11-25 23:22:43.730797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.512 [2024-11-25 23:22:43.731024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.512 [2024-11-25 23:22:43.731042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:11.512 [2024-11-25 23:22:43.731080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:21:11.512 [2024-11-25 23:22:43.731089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.512 [2024-11-25 23:22:43.757353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.512 [2024-11-25 23:22:43.757567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:11.512 [2024-11-25 23:22:43.757598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.200 ms 00:21:11.512 [2024-11-25 23:22:43.757608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.512 [2024-11-25 23:22:43.782665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.512 [2024-11-25 23:22:43.782713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:11.512 [2024-11-25 23:22:43.782730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.924 ms 00:21:11.512 [2024-11-25 23:22:43.782739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.512 [2024-11-25 23:22:43.783415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.512 [2024-11-25 23:22:43.783437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:11.512 
[2024-11-25 23:22:43.783451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:21:11.512 [2024-11-25 23:22:43.783462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.512 [2024-11-25 23:22:43.874808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.512 [2024-11-25 23:22:43.874857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:11.512 [2024-11-25 23:22:43.874877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.295 ms 00:21:11.512 [2024-11-25 23:22:43.874887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.773 [2024-11-25 23:22:43.903399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.773 [2024-11-25 23:22:43.903448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:11.773 [2024-11-25 23:22:43.903466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.412 ms 00:21:11.773 [2024-11-25 23:22:43.903476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.773 [2024-11-25 23:22:43.929673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.773 [2024-11-25 23:22:43.929718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:11.773 [2024-11-25 23:22:43.929733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.140 ms 00:21:11.773 [2024-11-25 23:22:43.929742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.773 [2024-11-25 23:22:43.956136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.773 [2024-11-25 23:22:43.956182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:11.773 [2024-11-25 23:22:43.956198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.339 ms 00:21:11.773 [2024-11-25 23:22:43.956206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.773 [2024-11-25 23:22:43.956265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.773 [2024-11-25 23:22:43.956275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:11.773 [2024-11-25 23:22:43.956293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:11.773 [2024-11-25 23:22:43.956301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.773 [2024-11-25 23:22:43.956403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.773 [2024-11-25 23:22:43.956419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:11.774 [2024-11-25 23:22:43.956431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:11.774 [2024-11-25 23:22:43.956439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.774 [2024-11-25 23:22:43.957887] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4433.938 ms, result 0 00:21:11.774 { 00:21:11.774 "name": "ftl0", 00:21:11.774 "uuid": "a5516937-5c6e-4854-a6f9-4ae2e284b0cc" 00:21:11.774 } 00:21:11.774 23:22:43 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:11.774 23:22:43 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:12.035 23:22:44 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:12.035 23:22:44 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:12.035 [2024-11-25 23:22:44.392976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.035 [2024-11-25 23:22:44.393036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:12.035 [2024-11-25 23:22:44.393050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:12.035 [2024-11-25 23:22:44.393082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.035 [2024-11-25 23:22:44.393109] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:12.035 [2024-11-25 23:22:44.396424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.035 [2024-11-25 23:22:44.396465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:12.035 [2024-11-25 23:22:44.396481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.292 ms 00:21:12.035 [2024-11-25 23:22:44.396490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.035 [2024-11-25 23:22:44.396785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.035 [2024-11-25 23:22:44.396802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:12.035 [2024-11-25 23:22:44.396819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:21:12.035 [2024-11-25 23:22:44.396829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.297 [2024-11-25 23:22:44.400116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.297 [2024-11-25 23:22:44.400152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:12.297 [2024-11-25 23:22:44.400166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms 00:21:12.297 [2024-11-25 23:22:44.400175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.297 [2024-11-25 23:22:44.406509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.297 [2024-11-25 23:22:44.406549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:12.297 [2024-11-25 23:22:44.406568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.309 ms 00:21:12.297 [2024-11-25 23:22:44.406577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.297 [2024-11-25 23:22:44.431947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.297 [2024-11-25 23:22:44.432162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:12.297 [2024-11-25 23:22:44.432190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.289 ms 00:21:12.297 [2024-11-25 23:22:44.432198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.297 [2024-11-25 23:22:44.450791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.297 [2024-11-25 23:22:44.450841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:12.297 [2024-11-25 23:22:44.450857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.538 ms 00:21:12.297 [2024-11-25 23:22:44.450867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.297 [2024-11-25 23:22:44.451039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.297 [2024-11-25 23:22:44.451053] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:12.297 [2024-11-25 23:22:44.451091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:21:12.297 [2024-11-25 23:22:44.451100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.297 [2024-11-25 23:22:44.477527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.297 [2024-11-25 23:22:44.477574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:12.297 [2024-11-25 23:22:44.477590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.399 ms 00:21:12.297 [2024-11-25 23:22:44.477597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.297 [2024-11-25 23:22:44.502751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.297 [2024-11-25 23:22:44.502808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:12.297 [2024-11-25 23:22:44.502822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.098 ms 00:21:12.297 [2024-11-25 23:22:44.502831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.297 [2024-11-25 23:22:44.527513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.297 [2024-11-25 23:22:44.527560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:12.297 [2024-11-25 23:22:44.527575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.623 ms 00:21:12.297 [2024-11-25 23:22:44.527582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.297 [2024-11-25 23:22:44.552367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.297 [2024-11-25 23:22:44.552413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:12.297 [2024-11-25 23:22:44.552428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.686 ms 00:21:12.297 [2024-11-25 23:22:44.552436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.297 [2024-11-25 23:22:44.552486] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:12.297 [2024-11-25 23:22:44.552503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:12.297 [2024-11-25 23:22:44.552521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:12.297 [2024-11-25 23:22:44.552530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:12.297 [2024-11-25 23:22:44.552541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:12.297 [2024-11-25 23:22:44.552549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:12.297 [2024-11-25 23:22:44.552560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:12.297 [2024-11-25 23:22:44.552568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:12.297 [2024-11-25 23:22:44.552582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:12.297 [2024-11-25 23:22:44.552591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:12.297 [2024-11-25 23:22:44.552602] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:12.297 [2024-11-25 23:22:44.552610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:12.297 [2024-11-25 23:22:44.552622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:12.297 [2024-11-25 23:22:44.552630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:12.297 [2024-11-25 23:22:44.552641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:12.297 [2024-11-25 23:22:44.552648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:12.297 [2024-11-25 23:22:44.552658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 
[2024-11-25 23:22:44.552839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.552999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:12.298 [2024-11-25 23:22:44.553125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:12.298 [2024-11-25 23:22:44.553561] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:12.298 [2024-11-25 23:22:44.553573] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a5516937-5c6e-4854-a6f9-4ae2e284b0cc 00:21:12.298 [2024-11-25 23:22:44.553581] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:12.298 [2024-11-25 23:22:44.553593] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:12.298 [2024-11-25 23:22:44.553606] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:12.298 [2024-11-25 23:22:44.553617] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:12.299 [2024-11-25 23:22:44.553625] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:12.299 [2024-11-25 23:22:44.553639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:12.299 [2024-11-25 23:22:44.553648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:12.299 [2024-11-25 23:22:44.553657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:12.299 [2024-11-25 23:22:44.553663] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:12.299 [2024-11-25 23:22:44.553672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.299 [2024-11-25 23:22:44.553680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:12.299 [2024-11-25 23:22:44.553691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:21:12.299 [2024-11-25 23:22:44.553703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.299 [2024-11-25 23:22:44.567181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.299 [2024-11-25 23:22:44.567223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:12.299 [2024-11-25 23:22:44.567237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.431 ms 00:21:12.299 [2024-11-25 23:22:44.567246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.299 [2024-11-25 23:22:44.567638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.299 [2024-11-25 23:22:44.567658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:12.299 [2024-11-25 23:22:44.567674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:21:12.299 [2024-11-25 23:22:44.567682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.299 [2024-11-25 23:22:44.617203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.299 [2024-11-25 23:22:44.617255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:12.299 [2024-11-25 23:22:44.617272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.299 [2024-11-25 23:22:44.617282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.299 [2024-11-25 23:22:44.617355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.299 [2024-11-25 23:22:44.617365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:12.299 [2024-11-25 23:22:44.617380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.299 [2024-11-25 23:22:44.617388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.299 [2024-11-25 23:22:44.617473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.299 [2024-11-25 23:22:44.617486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:12.299 [2024-11-25 23:22:44.617498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.299 [2024-11-25 23:22:44.617506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.299 [2024-11-25 23:22:44.617531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.299 [2024-11-25 23:22:44.617540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:12.299 [2024-11-25 23:22:44.617551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.299 [2024-11-25 23:22:44.617562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.561 [2024-11-25 23:22:44.708047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.561 [2024-11-25 23:22:44.708361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:12.561 [2024-11-25 23:22:44.708390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:12.561 [2024-11-25 23:22:44.708400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.561 [2024-11-25 23:22:44.781218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.561 [2024-11-25 23:22:44.781277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:12.561 [2024-11-25 23:22:44.781293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.561 [2024-11-25 23:22:44.781307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.561 [2024-11-25 23:22:44.781441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.561 [2024-11-25 23:22:44.781456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:12.561 [2024-11-25 23:22:44.781468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.561 [2024-11-25 23:22:44.781477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.561 [2024-11-25 23:22:44.781537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.561 [2024-11-25 23:22:44.781548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:12.561 [2024-11-25 23:22:44.781561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.561 [2024-11-25 23:22:44.781568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.561 [2024-11-25 23:22:44.781690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.561 [2024-11-25 23:22:44.781703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:12.561 [2024-11-25 23:22:44.781715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.561 [2024-11-25 23:22:44.781724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.561 [2024-11-25 23:22:44.781764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.561 [2024-11-25 23:22:44.781775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:12.561 [2024-11-25 23:22:44.781786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.561 [2024-11-25 23:22:44.781794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.561 [2024-11-25 23:22:44.781853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.561 [2024-11-25 23:22:44.781865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:12.561 [2024-11-25 23:22:44.781877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.561 [2024-11-25 23:22:44.781885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.561 [2024-11-25 23:22:44.781949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.561 [2024-11-25 23:22:44.781961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:12.561 [2024-11-25 23:22:44.781972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.561 [2024-11-25 23:22:44.781980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.561 [2024-11-25 23:22:44.782206] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 389.175 ms, result 0 00:21:12.561 true 00:21:12.561 23:22:44 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77436 
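At this point 'FTL shutdown' has finished (389.175 ms, result 0), bdev_ftl_unload has returned true, and the harness kills the SPDK app via the killprocess helper whose trace follows. A hedged sketch of that teardown sequence, under the semantics visible in the trace (pid 77436 is this run's reactor process):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  pid=77436
  "$rpc" bdev_ftl_unload -b ftl0        # prints 'true' once FTL shutdown completes
  if kill -0 "$pid" 2>/dev/null; then   # process still alive?
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                         # reap the SPDK reactor (reactor_0 here)
  fi

With the app gone, the trace below prepares the restore workload: a 1 GiB test file from /dev/urandom (bs=4K, count=256K, ~301 MB/s here), its md5sum, and a spdk_dd replay of the file into ftl0 using the saved bdev subsystem config (ftl.json).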
00:21:12.561 23:22:44 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77436 ']' 00:21:12.561 23:22:44 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77436 00:21:12.561 23:22:44 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:12.561 23:22:44 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.561 23:22:44 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77436 00:21:12.561 killing process with pid 77436 00:21:12.561 23:22:44 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:12.561 23:22:44 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:12.561 23:22:44 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77436' 00:21:12.561 23:22:44 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77436 00:21:12.561 23:22:44 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77436 00:21:19.157 23:22:50 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:22.459 262144+0 records in 00:21:22.459 262144+0 records out 00:21:22.459 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.56571 s, 301 MB/s 00:21:22.459 23:22:54 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:24.374 23:22:56 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:24.374 [2024-11-25 23:22:56.572410] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:21:24.374 [2024-11-25 23:22:56.572853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77668 ] 00:21:24.374 [2024-11-25 23:22:56.736898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.636 [2024-11-25 23:22:56.878414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.897 [2024-11-25 23:22:57.206112] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:24.897 [2024-11-25 23:22:57.206206] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:25.158 [2024-11-25 23:22:57.370074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.158 [2024-11-25 23:22:57.370142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:25.158 [2024-11-25 23:22:57.370160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:25.158 [2024-11-25 23:22:57.370170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.158 [2024-11-25 23:22:57.370231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.158 [2024-11-25 23:22:57.370245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:25.158 [2024-11-25 23:22:57.370255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:25.158 [2024-11-25 23:22:57.370264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.158 [2024-11-25 23:22:57.370286] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:25.158 [2024-11-25 23:22:57.371039] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:25.158 [2024-11-25 23:22:57.371085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.158 [2024-11-25 23:22:57.371095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:25.158 [2024-11-25 23:22:57.371106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:21:25.158 [2024-11-25 23:22:57.371114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.158 [2024-11-25 23:22:57.373357] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:25.158 [2024-11-25 23:22:57.388510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.158 [2024-11-25 23:22:57.388561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:25.158 [2024-11-25 23:22:57.388576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.155 ms 00:21:25.158 [2024-11-25 23:22:57.388585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.158 [2024-11-25 23:22:57.388670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.158 [2024-11-25 23:22:57.388681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:25.158 [2024-11-25 23:22:57.388691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:25.158 [2024-11-25 23:22:57.388700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.158 [2024-11-25 23:22:57.400029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.158 [2024-11-25 23:22:57.400091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:25.158 [2024-11-25 23:22:57.400105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.249 ms 00:21:25.158 [2024-11-25 23:22:57.400120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.158 [2024-11-25 23:22:57.400204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.158 [2024-11-25 23:22:57.400213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:25.158 [2024-11-25 23:22:57.400223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:25.158 [2024-11-25 23:22:57.400233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.158 [2024-11-25 23:22:57.400290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.158 [2024-11-25 23:22:57.400303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:25.158 [2024-11-25 23:22:57.400313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:25.159 [2024-11-25 23:22:57.400322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.159 [2024-11-25 23:22:57.400349] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:25.159 [2024-11-25 23:22:57.404973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.159 [2024-11-25 23:22:57.405016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:25.159 [2024-11-25 23:22:57.405030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.630 ms 00:21:25.159 [2024-11-25 23:22:57.405040] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.159 [2024-11-25 23:22:57.405093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.159 [2024-11-25 23:22:57.405104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:25.159 [2024-11-25 23:22:57.405114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:25.159 [2024-11-25 23:22:57.405122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.159 [2024-11-25 23:22:57.405162] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:25.159 [2024-11-25 23:22:57.405194] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:25.159 [2024-11-25 23:22:57.405238] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:25.159 [2024-11-25 23:22:57.405259] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:25.159 [2024-11-25 23:22:57.405372] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:25.159 [2024-11-25 23:22:57.405386] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:25.159 [2024-11-25 23:22:57.405398] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:25.159 [2024-11-25 23:22:57.405410] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:25.159 [2024-11-25 23:22:57.405421] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:25.159 [2024-11-25 23:22:57.405430] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:25.159 [2024-11-25 23:22:57.405439] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:25.159 [2024-11-25 23:22:57.405447] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:25.159 [2024-11-25 23:22:57.405458] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:25.159 [2024-11-25 23:22:57.405468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.159 [2024-11-25 23:22:57.405476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:25.159 [2024-11-25 23:22:57.405484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:21:25.159 [2024-11-25 23:22:57.405492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.159 [2024-11-25 23:22:57.405578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.159 [2024-11-25 23:22:57.405587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:25.159 [2024-11-25 23:22:57.405595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:25.159 [2024-11-25 23:22:57.405604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.159 [2024-11-25 23:22:57.405713] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:25.159 [2024-11-25 23:22:57.405725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:25.159 [2024-11-25 23:22:57.405734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:25.159 [2024-11-25 23:22:57.405743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.159 [2024-11-25 23:22:57.405752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:25.159 [2024-11-25 23:22:57.405760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:25.159 [2024-11-25 23:22:57.405768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:25.159 [2024-11-25 23:22:57.405777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:25.159 [2024-11-25 23:22:57.405784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:25.159 [2024-11-25 23:22:57.405792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:25.159 [2024-11-25 23:22:57.405799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:25.159 [2024-11-25 23:22:57.405806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:25.159 [2024-11-25 23:22:57.405818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:25.159 [2024-11-25 23:22:57.405832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:25.159 [2024-11-25 23:22:57.405840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:25.159 [2024-11-25 23:22:57.405847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.159 [2024-11-25 23:22:57.405854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:25.159 [2024-11-25 23:22:57.405862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:25.159 [2024-11-25 23:22:57.405868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.159 [2024-11-25 23:22:57.405875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:25.159 [2024-11-25 23:22:57.405882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:25.159 [2024-11-25 23:22:57.405888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.159 [2024-11-25 23:22:57.405896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:25.159 [2024-11-25 23:22:57.405903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:25.159 [2024-11-25 23:22:57.405910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.159 [2024-11-25 23:22:57.405916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:25.159 [2024-11-25 23:22:57.405923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:25.159 [2024-11-25 23:22:57.405930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.159 [2024-11-25 23:22:57.405936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:25.159 [2024-11-25 23:22:57.405943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:25.159 [2024-11-25 23:22:57.405950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:25.159 [2024-11-25 23:22:57.405958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:25.159 [2024-11-25 23:22:57.405966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:25.159 [2024-11-25 23:22:57.405972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:25.159 [2024-11-25 23:22:57.405978] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:25.159 [2024-11-25 23:22:57.405985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:25.159 [2024-11-25 23:22:57.405992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:25.159 [2024-11-25 23:22:57.405999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:25.159 [2024-11-25 23:22:57.406006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:25.159 [2024-11-25 23:22:57.406012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.159 [2024-11-25 23:22:57.406018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:25.159 [2024-11-25 23:22:57.406025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:25.159 [2024-11-25 23:22:57.406033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.159 [2024-11-25 23:22:57.406040] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:25.159 [2024-11-25 23:22:57.406050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:25.159 [2024-11-25 23:22:57.406105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:25.159 [2024-11-25 23:22:57.406113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:25.159 [2024-11-25 23:22:57.406122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:25.159 [2024-11-25 23:22:57.406130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:25.159 [2024-11-25 23:22:57.406137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:25.159 [2024-11-25 23:22:57.406145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:25.159 [2024-11-25 23:22:57.406152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:25.159 [2024-11-25 23:22:57.406159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:25.159 [2024-11-25 23:22:57.406168] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:25.159 [2024-11-25 23:22:57.406178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:25.159 [2024-11-25 23:22:57.406190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:25.159 [2024-11-25 23:22:57.406198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:25.159 [2024-11-25 23:22:57.406205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:25.159 [2024-11-25 23:22:57.406213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:25.159 [2024-11-25 23:22:57.406221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:25.159 [2024-11-25 23:22:57.406229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:25.159 [2024-11-25 23:22:57.406237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:25.159 [2024-11-25 23:22:57.406245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:25.159 [2024-11-25 23:22:57.406252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:25.159 [2024-11-25 23:22:57.406260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:25.159 [2024-11-25 23:22:57.406267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:25.159 [2024-11-25 23:22:57.406274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:25.159 [2024-11-25 23:22:57.406282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:25.159 [2024-11-25 23:22:57.406290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:25.160 [2024-11-25 23:22:57.406299] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:25.160 [2024-11-25 23:22:57.406307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:25.160 [2024-11-25 23:22:57.406316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:25.160 [2024-11-25 23:22:57.406323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:25.160 [2024-11-25 23:22:57.406332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:25.160 [2024-11-25 23:22:57.406341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:25.160 [2024-11-25 23:22:57.406349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.160 [2024-11-25 23:22:57.406359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:25.160 [2024-11-25 23:22:57.406368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:21:25.160 [2024-11-25 23:22:57.406379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.160 [2024-11-25 23:22:57.443884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.160 [2024-11-25 23:22:57.443936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:25.160 [2024-11-25 23:22:57.443947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.459 ms 00:21:25.160 [2024-11-25 23:22:57.443961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.160 [2024-11-25 23:22:57.444076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.160 [2024-11-25 23:22:57.444087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:25.160 [2024-11-25 23:22:57.444097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.088 ms 00:21:25.160 [2024-11-25 23:22:57.444105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.160 [2024-11-25 23:22:57.493024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.160 [2024-11-25 23:22:57.493096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:25.160 [2024-11-25 23:22:57.493110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.852 ms 00:21:25.160 [2024-11-25 23:22:57.493120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.160 [2024-11-25 23:22:57.493170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.160 [2024-11-25 23:22:57.493181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:25.160 [2024-11-25 23:22:57.493194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:25.160 [2024-11-25 23:22:57.493203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.160 [2024-11-25 23:22:57.493900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.160 [2024-11-25 23:22:57.493949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:25.160 [2024-11-25 23:22:57.493962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:21:25.160 [2024-11-25 23:22:57.493971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.160 [2024-11-25 23:22:57.494156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.160 [2024-11-25 23:22:57.494170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:25.160 [2024-11-25 23:22:57.494190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:21:25.160 [2024-11-25 23:22:57.494199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.160 [2024-11-25 23:22:57.511507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.160 [2024-11-25 23:22:57.511549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:25.160 [2024-11-25 23:22:57.511564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.287 ms 00:21:25.160 [2024-11-25 23:22:57.511573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.422 [2024-11-25 23:22:57.526484] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:25.422 [2024-11-25 23:22:57.526534] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:25.422 [2024-11-25 23:22:57.526548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.422 [2024-11-25 23:22:57.526558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:25.422 [2024-11-25 23:22:57.526569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.863 ms 00:21:25.422 [2024-11-25 23:22:57.526577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.422 [2024-11-25 23:22:57.552910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.422 [2024-11-25 23:22:57.552965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:25.422 [2024-11-25 23:22:57.552977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.279 ms 00:21:25.422 [2024-11-25 23:22:57.552986] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.422 [2024-11-25 23:22:57.565960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.422 [2024-11-25 23:22:57.566004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:25.422 [2024-11-25 23:22:57.566016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.919 ms 00:21:25.422 [2024-11-25 23:22:57.566024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.422 [2024-11-25 23:22:57.578366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.422 [2024-11-25 23:22:57.578409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:25.422 [2024-11-25 23:22:57.578421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.288 ms 00:21:25.422 [2024-11-25 23:22:57.578429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.422 [2024-11-25 23:22:57.579095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.422 [2024-11-25 23:22:57.579117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:25.422 [2024-11-25 23:22:57.579128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:21:25.422 [2024-11-25 23:22:57.579141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.422 [2024-11-25 23:22:57.650032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.422 [2024-11-25 23:22:57.650108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:25.422 [2024-11-25 23:22:57.650123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.871 ms 00:21:25.422 [2024-11-25 23:22:57.650141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.422 [2024-11-25 23:22:57.661438] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:25.422 [2024-11-25 23:22:57.664904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.422 [2024-11-25 23:22:57.665204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:25.422 [2024-11-25 23:22:57.665225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.706 ms 00:21:25.422 [2024-11-25 23:22:57.665235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.422 [2024-11-25 23:22:57.665332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.422 [2024-11-25 23:22:57.665346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:25.423 [2024-11-25 23:22:57.665356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:25.423 [2024-11-25 23:22:57.665365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.423 [2024-11-25 23:22:57.665439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.423 [2024-11-25 23:22:57.665451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:25.423 [2024-11-25 23:22:57.665461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:25.423 [2024-11-25 23:22:57.665469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.423 [2024-11-25 23:22:57.665491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.423 [2024-11-25 23:22:57.665500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:25.423 [2024-11-25 23:22:57.665509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:25.423 [2024-11-25 23:22:57.665516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.423 [2024-11-25 23:22:57.665555] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:25.423 [2024-11-25 23:22:57.665568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.423 [2024-11-25 23:22:57.665577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:25.423 [2024-11-25 23:22:57.665585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:25.423 [2024-11-25 23:22:57.665593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.423 [2024-11-25 23:22:57.691925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.423 [2024-11-25 23:22:57.691975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:25.423 [2024-11-25 23:22:57.691989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.314 ms 00:21:25.423 [2024-11-25 23:22:57.691998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.423 [2024-11-25 23:22:57.692113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.423 [2024-11-25 23:22:57.692125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:25.423 [2024-11-25 23:22:57.692135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:25.423 [2024-11-25 23:22:57.692143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.423 [2024-11-25 23:22:57.693569] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 322.951 ms, result 0 00:21:26.365  [2024-11-25T23:22:59.778Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-25T23:23:00.719Z] Copying: 40/1024 [MB] (22 MBps) [2024-11-25T23:23:02.108Z] Copying: 62/1024 [MB] (22 MBps) [2024-11-25T23:23:03.054Z] Copying: 81/1024 [MB] (19 MBps) [2024-11-25T23:23:04.000Z] Copying: 101/1024 [MB] (19 MBps) [2024-11-25T23:23:04.944Z] Copying: 117/1024 [MB] (16 MBps) [2024-11-25T23:23:05.890Z] Copying: 136/1024 [MB] (18 MBps) [2024-11-25T23:23:06.836Z] Copying: 154/1024 [MB] (18 MBps) [2024-11-25T23:23:07.775Z] Copying: 176/1024 [MB] (21 MBps) [2024-11-25T23:23:08.720Z] Copying: 212/1024 [MB] (36 MBps) [2024-11-25T23:23:10.114Z] Copying: 233/1024 [MB] (21 MBps) [2024-11-25T23:23:11.059Z] Copying: 250/1024 [MB] (16 MBps) [2024-11-25T23:23:12.005Z] Copying: 271/1024 [MB] (21 MBps) [2024-11-25T23:23:12.950Z] Copying: 292/1024 [MB] (20 MBps) [2024-11-25T23:23:13.896Z] Copying: 308/1024 [MB] (15 MBps) [2024-11-25T23:23:14.840Z] Copying: 326/1024 [MB] (18 MBps) [2024-11-25T23:23:15.788Z] Copying: 347/1024 [MB] (21 MBps) [2024-11-25T23:23:16.736Z] Copying: 367/1024 [MB] (20 MBps) [2024-11-25T23:23:18.124Z] Copying: 390/1024 [MB] (22 MBps) [2024-11-25T23:23:19.068Z] Copying: 405/1024 [MB] (15 MBps) [2024-11-25T23:23:20.011Z] Copying: 421/1024 [MB] (15 MBps) [2024-11-25T23:23:20.955Z] Copying: 437/1024 [MB] (15 MBps) [2024-11-25T23:23:21.899Z] Copying: 449/1024 [MB] (12 MBps) [2024-11-25T23:23:22.844Z] Copying: 461/1024 [MB] (12 MBps) [2024-11-25T23:23:23.788Z] Copying: 474/1024 [MB] (13 MBps) [2024-11-25T23:23:24.733Z] Copying: 485/1024 [MB] (11 MBps) [2024-11-25T23:23:26.120Z] Copying: 498/1024 [MB] (12 
MBps) [2024-11-25T23:23:27.066Z] Copying: 512/1024 [MB] (13 MBps) [2024-11-25T23:23:28.010Z] Copying: 530/1024 [MB] (17 MBps) [2024-11-25T23:23:28.955Z] Copying: 540/1024 [MB] (10 MBps) [2024-11-25T23:23:29.899Z] Copying: 551/1024 [MB] (11 MBps) [2024-11-25T23:23:30.842Z] Copying: 561/1024 [MB] (10 MBps) [2024-11-25T23:23:31.873Z] Copying: 573/1024 [MB] (11 MBps) [2024-11-25T23:23:32.835Z] Copying: 583/1024 [MB] (10 MBps) [2024-11-25T23:23:33.780Z] Copying: 593/1024 [MB] (10 MBps) [2024-11-25T23:23:34.724Z] Copying: 604/1024 [MB] (10 MBps) [2024-11-25T23:23:36.110Z] Copying: 616/1024 [MB] (11 MBps) [2024-11-25T23:23:37.055Z] Copying: 627/1024 [MB] (11 MBps) [2024-11-25T23:23:37.997Z] Copying: 637/1024 [MB] (10 MBps) [2024-11-25T23:23:38.940Z] Copying: 649/1024 [MB] (11 MBps) [2024-11-25T23:23:39.882Z] Copying: 660/1024 [MB] (11 MBps) [2024-11-25T23:23:40.826Z] Copying: 671/1024 [MB] (11 MBps) [2024-11-25T23:23:41.771Z] Copying: 683/1024 [MB] (11 MBps) [2024-11-25T23:23:42.716Z] Copying: 695/1024 [MB] (11 MBps) [2024-11-25T23:23:44.105Z] Copying: 705/1024 [MB] (10 MBps) [2024-11-25T23:23:45.048Z] Copying: 717/1024 [MB] (11 MBps) [2024-11-25T23:23:45.993Z] Copying: 729/1024 [MB] (11 MBps) [2024-11-25T23:23:46.937Z] Copying: 739/1024 [MB] (10 MBps) [2024-11-25T23:23:47.882Z] Copying: 751/1024 [MB] (11 MBps) [2024-11-25T23:23:48.828Z] Copying: 762/1024 [MB] (11 MBps) [2024-11-25T23:23:49.780Z] Copying: 773/1024 [MB] (11 MBps) [2024-11-25T23:23:50.724Z] Copying: 785/1024 [MB] (11 MBps) [2024-11-25T23:23:52.112Z] Copying: 796/1024 [MB] (11 MBps) [2024-11-25T23:23:53.057Z] Copying: 808/1024 [MB] (11 MBps) [2024-11-25T23:23:54.002Z] Copying: 819/1024 [MB] (10 MBps) [2024-11-25T23:23:54.945Z] Copying: 831/1024 [MB] (12 MBps) [2024-11-25T23:23:55.889Z] Copying: 843/1024 [MB] (11 MBps) [2024-11-25T23:23:56.833Z] Copying: 855/1024 [MB] (12 MBps) [2024-11-25T23:23:57.777Z] Copying: 867/1024 [MB] (11 MBps) [2024-11-25T23:23:58.721Z] Copying: 878/1024 [MB] (11 MBps) [2024-11-25T23:24:00.120Z] Copying: 889/1024 [MB] (10 MBps) [2024-11-25T23:24:01.061Z] Copying: 905/1024 [MB] (16 MBps) [2024-11-25T23:24:02.002Z] Copying: 917/1024 [MB] (12 MBps) [2024-11-25T23:24:02.966Z] Copying: 935/1024 [MB] (18 MBps) [2024-11-25T23:24:03.939Z] Copying: 958/1024 [MB] (23 MBps) [2024-11-25T23:24:04.882Z] Copying: 973/1024 [MB] (14 MBps) [2024-11-25T23:24:05.825Z] Copying: 987/1024 [MB] (14 MBps) [2024-11-25T23:24:06.770Z] Copying: 1001/1024 [MB] (13 MBps) [2024-11-25T23:24:07.031Z] Copying: 1017/1024 [MB] (16 MBps) [2024-11-25T23:24:07.031Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-25 23:24:06.979994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.662 [2024-11-25 23:24:06.980043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:34.662 [2024-11-25 23:24:06.980068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:34.662 [2024-11-25 23:24:06.980075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.662 [2024-11-25 23:24:06.980092] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:34.662 [2024-11-25 23:24:06.982368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.662 [2024-11-25 23:24:06.982390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:34.662 [2024-11-25 23:24:06.982399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.264 ms 00:22:34.662 [2024-11-25 
23:24:06.982410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.662 [2024-11-25 23:24:06.983894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.662 [2024-11-25 23:24:06.983916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:34.662 [2024-11-25 23:24:06.983924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:22:34.662 [2024-11-25 23:24:06.983931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.662 [2024-11-25 23:24:06.997680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.662 [2024-11-25 23:24:06.997704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:34.662 [2024-11-25 23:24:06.997712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.736 ms 00:22:34.662 [2024-11-25 23:24:06.997718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.662 [2024-11-25 23:24:07.002325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.662 [2024-11-25 23:24:07.002344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:34.662 [2024-11-25 23:24:07.002352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.577 ms 00:22:34.663 [2024-11-25 23:24:07.002359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.663 [2024-11-25 23:24:07.021173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.663 [2024-11-25 23:24:07.021196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:34.663 [2024-11-25 23:24:07.021204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.772 ms 00:22:34.663 [2024-11-25 23:24:07.021210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.923 [2024-11-25 23:24:07.033282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.923 [2024-11-25 23:24:07.033304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:34.923 [2024-11-25 23:24:07.033314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.046 ms 00:22:34.923 [2024-11-25 23:24:07.033321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.923 [2024-11-25 23:24:07.033418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.923 [2024-11-25 23:24:07.033431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:34.923 [2024-11-25 23:24:07.033438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:34.923 [2024-11-25 23:24:07.033444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.923 [2024-11-25 23:24:07.051503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.923 [2024-11-25 23:24:07.051524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:34.923 [2024-11-25 23:24:07.051533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.049 ms 00:22:34.923 [2024-11-25 23:24:07.051539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.923 [2024-11-25 23:24:07.069165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.923 [2024-11-25 23:24:07.069186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:34.923 [2024-11-25 23:24:07.069193] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 17.602 ms 00:22:34.923 [2024-11-25 23:24:07.069199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.923 [2024-11-25 23:24:07.086195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.923 [2024-11-25 23:24:07.086257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:34.923 [2024-11-25 23:24:07.086264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.972 ms 00:22:34.923 [2024-11-25 23:24:07.086270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.923 [2024-11-25 23:24:07.103854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.923 [2024-11-25 23:24:07.103875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:34.923 [2024-11-25 23:24:07.103883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.541 ms 00:22:34.923 [2024-11-25 23:24:07.103888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.923 [2024-11-25 23:24:07.103912] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:34.923 [2024-11-25 23:24:07.103924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.103937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.103943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.103949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.103955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.103961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.103967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.103972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.103978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.103984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.103990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.103996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
17: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:34.923 [2024-11-25 23:24:07.104096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104189] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104333] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 
23:24:07.104481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:34.924 [2024-11-25 23:24:07.104542] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:34.924 [2024-11-25 23:24:07.104548] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a5516937-5c6e-4854-a6f9-4ae2e284b0cc 00:22:34.924 [2024-11-25 23:24:07.104556] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:34.924 [2024-11-25 23:24:07.104562] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:34.924 [2024-11-25 23:24:07.104567] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:34.924 [2024-11-25 23:24:07.104574] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:34.924 [2024-11-25 23:24:07.104579] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:34.924 [2024-11-25 23:24:07.104591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:34.924 [2024-11-25 23:24:07.104597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:34.924 [2024-11-25 23:24:07.104601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:34.924 [2024-11-25 23:24:07.104606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:34.924 [2024-11-25 23:24:07.104612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.924 [2024-11-25 23:24:07.104618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:34.924 [2024-11-25 23:24:07.104624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:22:34.924 [2024-11-25 23:24:07.104629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.924 [2024-11-25 23:24:07.114786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.924 [2024-11-25 23:24:07.114807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:34.924 [2024-11-25 23:24:07.114815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.130 ms 00:22:34.924 [2024-11-25 23:24:07.114821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.924 [2024-11-25 23:24:07.115118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.924 [2024-11-25 
23:24:07.115127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:34.924 [2024-11-25 23:24:07.115134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:22:34.924 [2024-11-25 23:24:07.115144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.924 [2024-11-25 23:24:07.142435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.924 [2024-11-25 23:24:07.142459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:34.924 [2024-11-25 23:24:07.142466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.924 [2024-11-25 23:24:07.142473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.924 [2024-11-25 23:24:07.142518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.924 [2024-11-25 23:24:07.142525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:34.924 [2024-11-25 23:24:07.142531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.924 [2024-11-25 23:24:07.142540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.924 [2024-11-25 23:24:07.142583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.924 [2024-11-25 23:24:07.142590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:34.924 [2024-11-25 23:24:07.142597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.924 [2024-11-25 23:24:07.142603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.924 [2024-11-25 23:24:07.142615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.924 [2024-11-25 23:24:07.142622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:34.924 [2024-11-25 23:24:07.142628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.924 [2024-11-25 23:24:07.142633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.924 [2024-11-25 23:24:07.204325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.924 [2024-11-25 23:24:07.204355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:34.924 [2024-11-25 23:24:07.204365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.924 [2024-11-25 23:24:07.204371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.924 [2024-11-25 23:24:07.255314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.924 [2024-11-25 23:24:07.255347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:34.924 [2024-11-25 23:24:07.255356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.924 [2024-11-25 23:24:07.255364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.924 [2024-11-25 23:24:07.255435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.924 [2024-11-25 23:24:07.255443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:34.924 [2024-11-25 23:24:07.255450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.924 [2024-11-25 23:24:07.255456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.924 [2024-11-25 23:24:07.255484] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.924 [2024-11-25 23:24:07.255492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:34.924 [2024-11-25 23:24:07.255498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.924 [2024-11-25 23:24:07.255504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.924 [2024-11-25 23:24:07.255581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.924 [2024-11-25 23:24:07.255591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:34.924 [2024-11-25 23:24:07.255598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.924 [2024-11-25 23:24:07.255604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.924 [2024-11-25 23:24:07.255629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.924 [2024-11-25 23:24:07.255637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:34.924 [2024-11-25 23:24:07.255643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.924 [2024-11-25 23:24:07.255649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.924 [2024-11-25 23:24:07.255684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.924 [2024-11-25 23:24:07.255693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:34.924 [2024-11-25 23:24:07.255700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.924 [2024-11-25 23:24:07.255706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.924 [2024-11-25 23:24:07.255744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.924 [2024-11-25 23:24:07.255752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:34.924 [2024-11-25 23:24:07.255759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.924 [2024-11-25 23:24:07.255765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.924 [2024-11-25 23:24:07.255873] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.849 ms, result 0 00:22:35.867 00:22:35.867 00:22:35.867 23:24:08 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:36.126 [2024-11-25 23:24:08.269954] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
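At this point the log's write phase is complete: restore.sh filled a 1 GiB testfile from /dev/urandom, checksummed it, wrote it into the ftl0 bdev with spdk_dd (the "FTL startup" / "Copying" / "FTL shutdown" traces above), and now re-reads the same region back. As a minimal sketch of that round-trip, the script below strings the same commands together; the paths, bdev name (ftl0), and --count=262144 (4 KiB blocks = 1 GiB) are taken from the log, but the standalone script form and the final md5 comparison step are illustrative assumptions -- this excerpt does not show how test/ftl/restore.sh itself verifies the data.

#!/usr/bin/env bash
# Sketch of the ftl_restore round-trip this log traces; not the actual
# test/ftl/restore.sh from the SPDK repo, just the same flow in miniature.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
TESTFILE=$SPDK/test/ftl/testfile
FTL_JSON=$SPDK/test/ftl/config/ftl.json

# 1 GiB of random data, as in the log (bs=4K count=256K).
dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K

md5_before=$(md5sum "$TESTFILE" | cut -d' ' -f1)

# Write the file into the FTL bdev; spdk_dd tears the device down
# afterwards (the "FTL shutdown ... Set FTL clean state" trace above).
"$SPDK/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"

# Read the same 262144 blocks back over the testfile, which forces a
# fresh FTL startup that must restore L2P/band state from metadata.
"$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$TESTFILE" --json="$FTL_JSON" --count=262144

# Assumed verification step: compare checksums before and after.
md5_after=$(md5sum "$TESTFILE" | cut -d' ' -f1)
if [[ "$md5_before" == "$md5_after" ]]; then
    echo "ftl restore round-trip OK"
else
    echo "ftl restore round-trip FAILED" >&2
    exit 1
fi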
00:22:36.126 [2024-11-25 23:24:08.270091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78410 ] 00:22:36.126 [2024-11-25 23:24:08.427254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.385 [2024-11-25 23:24:08.524293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.643 [2024-11-25 23:24:08.750093] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:36.643 [2024-11-25 23:24:08.750143] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:36.643 [2024-11-25 23:24:08.902915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.643 [2024-11-25 23:24:08.902956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:36.643 [2024-11-25 23:24:08.902968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:36.643 [2024-11-25 23:24:08.902975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.643 [2024-11-25 23:24:08.903009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.643 [2024-11-25 23:24:08.903019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:36.643 [2024-11-25 23:24:08.903026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:36.643 [2024-11-25 23:24:08.903032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.643 [2024-11-25 23:24:08.903045] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:36.643 [2024-11-25 23:24:08.903569] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:36.643 [2024-11-25 23:24:08.903584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.643 [2024-11-25 23:24:08.903591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:36.643 [2024-11-25 23:24:08.903598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:22:36.643 [2024-11-25 23:24:08.903604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.643 [2024-11-25 23:24:08.904837] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:36.643 [2024-11-25 23:24:08.914918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.643 [2024-11-25 23:24:08.914947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:36.643 [2024-11-25 23:24:08.914956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.082 ms 00:22:36.643 [2024-11-25 23:24:08.914962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.643 [2024-11-25 23:24:08.915008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.643 [2024-11-25 23:24:08.915017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:36.643 [2024-11-25 23:24:08.915024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:36.643 [2024-11-25 23:24:08.915029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.643 [2024-11-25 23:24:08.921050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:36.643 [2024-11-25 23:24:08.921089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:36.643 [2024-11-25 23:24:08.921097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.968 ms 00:22:36.643 [2024-11-25 23:24:08.921106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.643 [2024-11-25 23:24:08.921162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.643 [2024-11-25 23:24:08.921170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:36.643 [2024-11-25 23:24:08.921177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:36.643 [2024-11-25 23:24:08.921184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.643 [2024-11-25 23:24:08.921216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.643 [2024-11-25 23:24:08.921224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:36.643 [2024-11-25 23:24:08.921231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:36.643 [2024-11-25 23:24:08.921237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.643 [2024-11-25 23:24:08.921254] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:36.643 [2024-11-25 23:24:08.924245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.643 [2024-11-25 23:24:08.924270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:36.644 [2024-11-25 23:24:08.924280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.996 ms 00:22:36.644 [2024-11-25 23:24:08.924286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.644 [2024-11-25 23:24:08.924316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.644 [2024-11-25 23:24:08.924323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:36.644 [2024-11-25 23:24:08.924330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:36.644 [2024-11-25 23:24:08.924335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.644 [2024-11-25 23:24:08.924349] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:36.644 [2024-11-25 23:24:08.924365] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:36.644 [2024-11-25 23:24:08.924394] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:36.644 [2024-11-25 23:24:08.924407] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:36.644 [2024-11-25 23:24:08.924489] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:36.644 [2024-11-25 23:24:08.924498] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:36.644 [2024-11-25 23:24:08.924507] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:36.644 [2024-11-25 23:24:08.924515] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:36.644 [2024-11-25 23:24:08.924522] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:36.644 [2024-11-25 23:24:08.924528] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:36.644 [2024-11-25 23:24:08.924535] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:36.644 [2024-11-25 23:24:08.924540] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:36.644 [2024-11-25 23:24:08.924548] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:36.644 [2024-11-25 23:24:08.924554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.644 [2024-11-25 23:24:08.924561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:36.644 [2024-11-25 23:24:08.924567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:22:36.644 [2024-11-25 23:24:08.924573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.644 [2024-11-25 23:24:08.924636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.644 [2024-11-25 23:24:08.924643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:36.644 [2024-11-25 23:24:08.924649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:36.644 [2024-11-25 23:24:08.924654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.644 [2024-11-25 23:24:08.924731] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:36.644 [2024-11-25 23:24:08.924739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:36.644 [2024-11-25 23:24:08.924745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:36.644 [2024-11-25 23:24:08.924752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:36.644 [2024-11-25 23:24:08.924758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:36.644 [2024-11-25 23:24:08.924764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:36.644 [2024-11-25 23:24:08.924769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:36.644 [2024-11-25 23:24:08.924774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:36.644 [2024-11-25 23:24:08.924780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:36.644 [2024-11-25 23:24:08.924785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:36.644 [2024-11-25 23:24:08.924791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:36.644 [2024-11-25 23:24:08.924797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:36.644 [2024-11-25 23:24:08.924801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:36.644 [2024-11-25 23:24:08.924812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:36.644 [2024-11-25 23:24:08.924818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:36.644 [2024-11-25 23:24:08.924825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:36.644 [2024-11-25 23:24:08.924830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:36.644 [2024-11-25 23:24:08.924835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:36.644 [2024-11-25 23:24:08.924840] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:36.644 [2024-11-25 23:24:08.924844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:36.644 [2024-11-25 23:24:08.924849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:36.644 [2024-11-25 23:24:08.924855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:36.644 [2024-11-25 23:24:08.924860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:36.644 [2024-11-25 23:24:08.924873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:36.644 [2024-11-25 23:24:08.924878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:36.644 [2024-11-25 23:24:08.924883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:36.644 [2024-11-25 23:24:08.924889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:36.644 [2024-11-25 23:24:08.924895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:36.644 [2024-11-25 23:24:08.924900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:36.644 [2024-11-25 23:24:08.924905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:36.644 [2024-11-25 23:24:08.924910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:36.644 [2024-11-25 23:24:08.924915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:36.644 [2024-11-25 23:24:08.924920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:36.644 [2024-11-25 23:24:08.924924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:36.644 [2024-11-25 23:24:08.924930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:36.644 [2024-11-25 23:24:08.924935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:36.644 [2024-11-25 23:24:08.924940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:36.644 [2024-11-25 23:24:08.924945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:36.644 [2024-11-25 23:24:08.924950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:36.644 [2024-11-25 23:24:08.924955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:36.644 [2024-11-25 23:24:08.924960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:36.644 [2024-11-25 23:24:08.924965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:36.644 [2024-11-25 23:24:08.924969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:36.644 [2024-11-25 23:24:08.924974] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:36.644 [2024-11-25 23:24:08.924980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:36.644 [2024-11-25 23:24:08.924985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:36.644 [2024-11-25 23:24:08.924991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:36.644 [2024-11-25 23:24:08.924997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:36.644 [2024-11-25 23:24:08.925004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:36.644 [2024-11-25 23:24:08.925009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:36.644 
[2024-11-25 23:24:08.925014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:36.644 [2024-11-25 23:24:08.925019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:36.644 [2024-11-25 23:24:08.925024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:36.644 [2024-11-25 23:24:08.925031] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:36.644 [2024-11-25 23:24:08.925037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:36.644 [2024-11-25 23:24:08.925046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:36.644 [2024-11-25 23:24:08.925053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:36.644 [2024-11-25 23:24:08.925071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:36.644 [2024-11-25 23:24:08.925077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:36.644 [2024-11-25 23:24:08.925083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:36.644 [2024-11-25 23:24:08.925088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:36.644 [2024-11-25 23:24:08.925094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:36.644 [2024-11-25 23:24:08.925100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:36.644 [2024-11-25 23:24:08.925105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:36.644 [2024-11-25 23:24:08.925111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:36.644 [2024-11-25 23:24:08.925116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:36.644 [2024-11-25 23:24:08.925123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:36.644 [2024-11-25 23:24:08.925128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:36.644 [2024-11-25 23:24:08.925134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:36.644 [2024-11-25 23:24:08.925140] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:36.644 [2024-11-25 23:24:08.925146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:36.644 [2024-11-25 23:24:08.925152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:36.645 [2024-11-25 23:24:08.925158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:36.645 [2024-11-25 23:24:08.925163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:36.645 [2024-11-25 23:24:08.925169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:36.645 [2024-11-25 23:24:08.925175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.645 [2024-11-25 23:24:08.925181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:36.645 [2024-11-25 23:24:08.925187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:22:36.645 [2024-11-25 23:24:08.925192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.645 [2024-11-25 23:24:08.949234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.645 [2024-11-25 23:24:08.949260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:36.645 [2024-11-25 23:24:08.949269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.993 ms 00:22:36.645 [2024-11-25 23:24:08.949278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.645 [2024-11-25 23:24:08.949344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.645 [2024-11-25 23:24:08.949351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:36.645 [2024-11-25 23:24:08.949357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:36.645 [2024-11-25 23:24:08.949364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.645 [2024-11-25 23:24:08.987626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.645 [2024-11-25 23:24:08.987657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:36.645 [2024-11-25 23:24:08.987667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.222 ms 00:22:36.645 [2024-11-25 23:24:08.987674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.645 [2024-11-25 23:24:08.987706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.645 [2024-11-25 23:24:08.987714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:36.645 [2024-11-25 23:24:08.987724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:36.645 [2024-11-25 23:24:08.987730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.645 [2024-11-25 23:24:08.988149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.645 [2024-11-25 23:24:08.988162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:36.645 [2024-11-25 23:24:08.988170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:22:36.645 [2024-11-25 23:24:08.988176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.645 [2024-11-25 23:24:08.988285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.645 [2024-11-25 23:24:08.988293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:36.645 [2024-11-25 23:24:08.988299] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:22:36.645 [2024-11-25 23:24:08.988310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.645 [2024-11-25 23:24:09.000071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.645 [2024-11-25 23:24:09.000097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:36.645 [2024-11-25 23:24:09.000107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.745 ms 00:22:36.645 [2024-11-25 23:24:09.000113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.906 [2024-11-25 23:24:09.010269] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:36.906 [2024-11-25 23:24:09.010298] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:36.906 [2024-11-25 23:24:09.010308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.906 [2024-11-25 23:24:09.010315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:36.906 [2024-11-25 23:24:09.010323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.109 ms 00:22:36.906 [2024-11-25 23:24:09.010328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.906 [2024-11-25 23:24:09.028901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.906 [2024-11-25 23:24:09.028929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:36.906 [2024-11-25 23:24:09.028938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.541 ms 00:22:36.906 [2024-11-25 23:24:09.028944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.906 [2024-11-25 23:24:09.038005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.906 [2024-11-25 23:24:09.038031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:36.906 [2024-11-25 23:24:09.038039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.021 ms 00:22:36.906 [2024-11-25 23:24:09.038045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.906 [2024-11-25 23:24:09.046854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.906 [2024-11-25 23:24:09.046879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:36.906 [2024-11-25 23:24:09.046887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.776 ms 00:22:36.906 [2024-11-25 23:24:09.046892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.906 [2024-11-25 23:24:09.047371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.906 [2024-11-25 23:24:09.047394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:36.906 [2024-11-25 23:24:09.047403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:22:36.906 [2024-11-25 23:24:09.047409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.906 [2024-11-25 23:24:09.094779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.906 [2024-11-25 23:24:09.094812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:36.906 [2024-11-25 23:24:09.094825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 47.356 ms 00:22:36.906 [2024-11-25 23:24:09.094832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.906 [2024-11-25 23:24:09.103591] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:36.906 [2024-11-25 23:24:09.105951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.906 [2024-11-25 23:24:09.105975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:36.906 [2024-11-25 23:24:09.105984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.087 ms 00:22:36.906 [2024-11-25 23:24:09.105991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.906 [2024-11-25 23:24:09.106045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.906 [2024-11-25 23:24:09.106063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:36.906 [2024-11-25 23:24:09.106071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:36.906 [2024-11-25 23:24:09.106079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.906 [2024-11-25 23:24:09.106148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.906 [2024-11-25 23:24:09.106158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:36.906 [2024-11-25 23:24:09.106165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:36.906 [2024-11-25 23:24:09.106171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.906 [2024-11-25 23:24:09.106188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.906 [2024-11-25 23:24:09.106195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:36.906 [2024-11-25 23:24:09.106201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:36.906 [2024-11-25 23:24:09.106208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.906 [2024-11-25 23:24:09.106238] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:36.906 [2024-11-25 23:24:09.106246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.906 [2024-11-25 23:24:09.106252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:36.906 [2024-11-25 23:24:09.106260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:36.906 [2024-11-25 23:24:09.106266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.906 [2024-11-25 23:24:09.124909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.906 [2024-11-25 23:24:09.124935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:36.906 [2024-11-25 23:24:09.124944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.628 ms 00:22:36.906 [2024-11-25 23:24:09.124954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.906 [2024-11-25 23:24:09.125011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.906 [2024-11-25 23:24:09.125018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:36.906 [2024-11-25 23:24:09.125025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:36.906 [2024-11-25 23:24:09.125031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:36.906 [2024-11-25 23:24:09.125960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 222.679 ms, result 0 00:22:38.294  [2024-11-25T23:24:11.607Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-25T23:24:12.552Z] Copying: 37/1024 [MB] (19 MBps) [2024-11-25T23:24:13.498Z] Copying: 53/1024 [MB] (15 MBps) [2024-11-25T23:24:14.440Z] Copying: 72/1024 [MB] (18 MBps) [2024-11-25T23:24:15.386Z] Copying: 92/1024 [MB] (20 MBps) [2024-11-25T23:24:16.329Z] Copying: 109/1024 [MB] (17 MBps) [2024-11-25T23:24:17.273Z] Copying: 124/1024 [MB] (14 MBps) [2024-11-25T23:24:18.658Z] Copying: 137/1024 [MB] (12 MBps) [2024-11-25T23:24:19.600Z] Copying: 148/1024 [MB] (11 MBps) [2024-11-25T23:24:20.542Z] Copying: 160/1024 [MB] (11 MBps) [2024-11-25T23:24:21.484Z] Copying: 173/1024 [MB] (12 MBps) [2024-11-25T23:24:22.429Z] Copying: 185/1024 [MB] (12 MBps) [2024-11-25T23:24:23.391Z] Copying: 198/1024 [MB] (12 MBps) [2024-11-25T23:24:24.335Z] Copying: 210/1024 [MB] (12 MBps) [2024-11-25T23:24:25.275Z] Copying: 222/1024 [MB] (11 MBps) [2024-11-25T23:24:26.660Z] Copying: 234/1024 [MB] (12 MBps) [2024-11-25T23:24:27.603Z] Copying: 246/1024 [MB] (12 MBps) [2024-11-25T23:24:28.546Z] Copying: 258/1024 [MB] (12 MBps) [2024-11-25T23:24:29.489Z] Copying: 271/1024 [MB] (12 MBps) [2024-11-25T23:24:30.433Z] Copying: 282/1024 [MB] (11 MBps) [2024-11-25T23:24:31.377Z] Copying: 294/1024 [MB] (12 MBps) [2024-11-25T23:24:32.322Z] Copying: 306/1024 [MB] (12 MBps) [2024-11-25T23:24:33.709Z] Copying: 318/1024 [MB] (12 MBps) [2024-11-25T23:24:34.382Z] Copying: 330/1024 [MB] (11 MBps) [2024-11-25T23:24:35.348Z] Copying: 342/1024 [MB] (11 MBps) [2024-11-25T23:24:36.292Z] Copying: 354/1024 [MB] (12 MBps) [2024-11-25T23:24:37.678Z] Copying: 365/1024 [MB] (11 MBps) [2024-11-25T23:24:38.622Z] Copying: 377/1024 [MB] (11 MBps) [2024-11-25T23:24:39.564Z] Copying: 388/1024 [MB] (10 MBps) [2024-11-25T23:24:40.505Z] Copying: 399/1024 [MB] (11 MBps) [2024-11-25T23:24:41.446Z] Copying: 415/1024 [MB] (16 MBps) [2024-11-25T23:24:42.404Z] Copying: 432/1024 [MB] (17 MBps) [2024-11-25T23:24:43.348Z] Copying: 446/1024 [MB] (13 MBps) [2024-11-25T23:24:44.290Z] Copying: 458/1024 [MB] (12 MBps) [2024-11-25T23:24:45.674Z] Copying: 470/1024 [MB] (11 MBps) [2024-11-25T23:24:46.617Z] Copying: 481/1024 [MB] (11 MBps) [2024-11-25T23:24:47.561Z] Copying: 493/1024 [MB] (11 MBps) [2024-11-25T23:24:48.507Z] Copying: 505/1024 [MB] (12 MBps) [2024-11-25T23:24:49.450Z] Copying: 517/1024 [MB] (11 MBps) [2024-11-25T23:24:50.394Z] Copying: 528/1024 [MB] (11 MBps) [2024-11-25T23:24:51.338Z] Copying: 540/1024 [MB] (12 MBps) [2024-11-25T23:24:52.283Z] Copying: 552/1024 [MB] (12 MBps) [2024-11-25T23:24:53.672Z] Copying: 563/1024 [MB] (10 MBps) [2024-11-25T23:24:54.614Z] Copying: 575/1024 [MB] (11 MBps) [2024-11-25T23:24:55.558Z] Copying: 587/1024 [MB] (12 MBps) [2024-11-25T23:24:56.503Z] Copying: 599/1024 [MB] (11 MBps) [2024-11-25T23:24:57.446Z] Copying: 610/1024 [MB] (11 MBps) [2024-11-25T23:24:58.389Z] Copying: 622/1024 [MB] (11 MBps) [2024-11-25T23:24:59.339Z] Copying: 633/1024 [MB] (11 MBps) [2024-11-25T23:25:00.282Z] Copying: 645/1024 [MB] (11 MBps) [2024-11-25T23:25:01.670Z] Copying: 657/1024 [MB] (11 MBps) [2024-11-25T23:25:02.615Z] Copying: 669/1024 [MB] (12 MBps) [2024-11-25T23:25:03.559Z] Copying: 681/1024 [MB] (11 MBps) [2024-11-25T23:25:04.499Z] Copying: 692/1024 [MB] (11 MBps) [2024-11-25T23:25:05.438Z] Copying: 704/1024 [MB] (11 MBps) [2024-11-25T23:25:06.451Z] Copying: 716/1024 [MB] (11 MBps) 
[2024-11-25T23:25:07.411Z] Copying: 728/1024 [MB] (11 MBps) [2024-11-25T23:25:08.355Z] Copying: 740/1024 [MB] (11 MBps) [2024-11-25T23:25:09.296Z] Copying: 751/1024 [MB] (11 MBps) [2024-11-25T23:25:10.683Z] Copying: 763/1024 [MB] (11 MBps) [2024-11-25T23:25:11.629Z] Copying: 779/1024 [MB] (16 MBps) [2024-11-25T23:25:12.574Z] Copying: 791/1024 [MB] (11 MBps) [2024-11-25T23:25:13.519Z] Copying: 801/1024 [MB] (10 MBps) [2024-11-25T23:25:14.463Z] Copying: 812/1024 [MB] (10 MBps) [2024-11-25T23:25:15.407Z] Copying: 823/1024 [MB] (11 MBps) [2024-11-25T23:25:16.351Z] Copying: 834/1024 [MB] (10 MBps) [2024-11-25T23:25:17.294Z] Copying: 846/1024 [MB] (11 MBps) [2024-11-25T23:25:18.681Z] Copying: 857/1024 [MB] (11 MBps) [2024-11-25T23:25:19.625Z] Copying: 869/1024 [MB] (11 MBps) [2024-11-25T23:25:20.568Z] Copying: 880/1024 [MB] (11 MBps) [2024-11-25T23:25:21.512Z] Copying: 892/1024 [MB] (11 MBps) [2024-11-25T23:25:22.455Z] Copying: 903/1024 [MB] (11 MBps) [2024-11-25T23:25:23.398Z] Copying: 914/1024 [MB] (10 MBps) [2024-11-25T23:25:24.343Z] Copying: 925/1024 [MB] (10 MBps) [2024-11-25T23:25:25.287Z] Copying: 935/1024 [MB] (10 MBps) [2024-11-25T23:25:26.674Z] Copying: 947/1024 [MB] (11 MBps) [2024-11-25T23:25:27.619Z] Copying: 957/1024 [MB] (10 MBps) [2024-11-25T23:25:28.562Z] Copying: 969/1024 [MB] (11 MBps) [2024-11-25T23:25:29.502Z] Copying: 980/1024 [MB] (11 MBps) [2024-11-25T23:25:30.445Z] Copying: 992/1024 [MB] (11 MBps) [2024-11-25T23:25:31.387Z] Copying: 1004/1024 [MB] (11 MBps) [2024-11-25T23:25:32.330Z] Copying: 1015/1024 [MB] (10 MBps) [2024-11-25T23:25:32.330Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-25 23:25:32.224098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.961 [2024-11-25 23:25:32.224169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:59.961 [2024-11-25 23:25:32.224186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:59.961 [2024-11-25 23:25:32.224196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.961 [2024-11-25 23:25:32.224220] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:59.961 [2024-11-25 23:25:32.227947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.961 [2024-11-25 23:25:32.227984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:59.961 [2024-11-25 23:25:32.228005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.709 ms 00:23:59.961 [2024-11-25 23:25:32.228173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.961 [2024-11-25 23:25:32.228438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.961 [2024-11-25 23:25:32.228458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:59.961 [2024-11-25 23:25:32.228469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:23:59.961 [2024-11-25 23:25:32.228478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.961 [2024-11-25 23:25:32.233168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.961 [2024-11-25 23:25:32.233193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:59.961 [2024-11-25 23:25:32.233206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.674 ms 00:23:59.961 [2024-11-25 23:25:32.233219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:59.961 [2024-11-25 23:25:32.239478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.961 [2024-11-25 23:25:32.239503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:59.961 [2024-11-25 23:25:32.239511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.240 ms 00:23:59.961 [2024-11-25 23:25:32.239517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.961 [2024-11-25 23:25:32.259167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.961 [2024-11-25 23:25:32.259195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:59.961 [2024-11-25 23:25:32.259203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.600 ms 00:23:59.961 [2024-11-25 23:25:32.259209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.961 [2024-11-25 23:25:32.270856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.961 [2024-11-25 23:25:32.270883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:59.961 [2024-11-25 23:25:32.270892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.619 ms 00:23:59.961 [2024-11-25 23:25:32.270900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.961 [2024-11-25 23:25:32.271007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.961 [2024-11-25 23:25:32.271015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:59.961 [2024-11-25 23:25:32.271023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:59.961 [2024-11-25 23:25:32.271031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.961 [2024-11-25 23:25:32.289597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.961 [2024-11-25 23:25:32.289629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:59.961 [2024-11-25 23:25:32.289637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.555 ms 00:23:59.961 [2024-11-25 23:25:32.289643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.961 [2024-11-25 23:25:32.307610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.961 [2024-11-25 23:25:32.307634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:59.961 [2024-11-25 23:25:32.307642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.942 ms 00:23:59.961 [2024-11-25 23:25:32.307648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.224 [2024-11-25 23:25:32.325374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.224 [2024-11-25 23:25:32.325397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:00.224 [2024-11-25 23:25:32.325405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.701 ms 00:24:00.224 [2024-11-25 23:25:32.325411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.224 [2024-11-25 23:25:32.343103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.224 [2024-11-25 23:25:32.343125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:00.224 [2024-11-25 23:25:32.343133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.638 ms 00:24:00.224 [2024-11-25 
23:25:32.343139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.224 [2024-11-25 23:25:32.343164] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:00.224 [2024-11-25 23:25:32.343179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343321] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 
23:25:32.343469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:00.224 [2024-11-25 23:25:32.343603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:24:00.225 [2024-11-25 23:25:32.343614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:00.225 [2024-11-25 23:25:32.343774] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:00.225 [2024-11-25 23:25:32.343782] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a5516937-5c6e-4854-a6f9-4ae2e284b0cc 00:24:00.225 [2024-11-25 23:25:32.343788] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:00.225 [2024-11-25 23:25:32.343794] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:00.225 [2024-11-25 23:25:32.343799] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:00.225 [2024-11-25 23:25:32.343805] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:00.225 [2024-11-25 23:25:32.343816] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:00.225 [2024-11-25 23:25:32.343823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:00.225 [2024-11-25 23:25:32.343830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:00.225 [2024-11-25 23:25:32.343836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:00.225 [2024-11-25 23:25:32.343841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:00.225 [2024-11-25 23:25:32.343846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.225 [2024-11-25 23:25:32.343852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:00.225 [2024-11-25 23:25:32.343859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:24:00.225 [2024-11-25 23:25:32.343865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.225 [2024-11-25 23:25:32.353758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.225 [2024-11-25 23:25:32.353783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:00.225 [2024-11-25 23:25:32.353792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.878 ms 00:24:00.225 [2024-11-25 23:25:32.353798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.225 [2024-11-25 23:25:32.354094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.225 [2024-11-25 23:25:32.354107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:00.225 [2024-11-25 23:25:32.354118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:24:00.225 [2024-11-25 23:25:32.354124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.225 [2024-11-25 23:25:32.381492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.225 [2024-11-25 23:25:32.381516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:00.225 [2024-11-25 23:25:32.381525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.225 [2024-11-25 23:25:32.381532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.225 [2024-11-25 23:25:32.381571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.225 [2024-11-25 23:25:32.381578] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:00.225 [2024-11-25 23:25:32.381588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.225 [2024-11-25 23:25:32.381593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.225 [2024-11-25 23:25:32.381636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.225 [2024-11-25 23:25:32.381644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:00.225 [2024-11-25 23:25:32.381650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.225 [2024-11-25 23:25:32.381656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.225 [2024-11-25 23:25:32.381668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.225 [2024-11-25 23:25:32.381675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:00.225 [2024-11-25 23:25:32.381681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.225 [2024-11-25 23:25:32.381690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.225 [2024-11-25 23:25:32.443933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.225 [2024-11-25 23:25:32.443964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:00.225 [2024-11-25 23:25:32.443974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.225 [2024-11-25 23:25:32.443980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.225 [2024-11-25 23:25:32.494803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.225 [2024-11-25 23:25:32.494837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:00.225 [2024-11-25 23:25:32.494847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.225 [2024-11-25 23:25:32.494858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.225 [2024-11-25 23:25:32.494923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.225 [2024-11-25 23:25:32.494931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:00.225 [2024-11-25 23:25:32.494938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.225 [2024-11-25 23:25:32.494944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.225 [2024-11-25 23:25:32.494974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.225 [2024-11-25 23:25:32.494982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:00.225 [2024-11-25 23:25:32.494989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.225 [2024-11-25 23:25:32.494996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.225 [2024-11-25 23:25:32.495081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.225 [2024-11-25 23:25:32.495089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:00.225 [2024-11-25 23:25:32.495097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.225 [2024-11-25 23:25:32.495103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.225 [2024-11-25 23:25:32.495128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:24:00.225 [2024-11-25 23:25:32.495136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:00.225 [2024-11-25 23:25:32.495142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.225 [2024-11-25 23:25:32.495149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.225 [2024-11-25 23:25:32.495185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.225 [2024-11-25 23:25:32.495192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:00.225 [2024-11-25 23:25:32.495198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.225 [2024-11-25 23:25:32.495205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.225 [2024-11-25 23:25:32.495242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.225 [2024-11-25 23:25:32.495250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:00.225 [2024-11-25 23:25:32.495257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.225 [2024-11-25 23:25:32.495262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.225 [2024-11-25 23:25:32.495370] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 271.250 ms, result 0 00:24:00.795 00:24:00.795 00:24:00.795 23:25:33 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:02.704 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:02.704 23:25:34 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:02.705 [2024-11-25 23:25:34.782758] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:24:02.705 [2024-11-25 23:25:34.782875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79293 ] 00:24:02.705 [2024-11-25 23:25:34.938795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.705 [2024-11-25 23:25:35.032223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.964 [2024-11-25 23:25:35.259552] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:02.964 [2024-11-25 23:25:35.259604] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:03.225 [2024-11-25 23:25:35.414923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.225 [2024-11-25 23:25:35.414963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:03.225 [2024-11-25 23:25:35.414973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:03.225 [2024-11-25 23:25:35.414980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.225 [2024-11-25 23:25:35.415019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.225 [2024-11-25 23:25:35.415029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:03.225 [2024-11-25 23:25:35.415036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:03.225 [2024-11-25 23:25:35.415042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.225 [2024-11-25 23:25:35.415064] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:03.225 [2024-11-25 23:25:35.415688] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:03.225 [2024-11-25 23:25:35.415711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.225 [2024-11-25 23:25:35.415717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:03.225 [2024-11-25 23:25:35.415724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:24:03.225 [2024-11-25 23:25:35.415730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.225 [2024-11-25 23:25:35.417021] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:03.225 [2024-11-25 23:25:35.427308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.225 [2024-11-25 23:25:35.427335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:03.225 [2024-11-25 23:25:35.427343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.288 ms 00:24:03.225 [2024-11-25 23:25:35.427350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.225 [2024-11-25 23:25:35.427395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.225 [2024-11-25 23:25:35.427403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:03.226 [2024-11-25 23:25:35.427410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:03.226 [2024-11-25 23:25:35.427416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.226 [2024-11-25 23:25:35.433589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:03.226 [2024-11-25 23:25:35.433612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:03.226 [2024-11-25 23:25:35.433620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.130 ms 00:24:03.226 [2024-11-25 23:25:35.433629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.226 [2024-11-25 23:25:35.433686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.226 [2024-11-25 23:25:35.433693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:03.226 [2024-11-25 23:25:35.433699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:03.226 [2024-11-25 23:25:35.433705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.226 [2024-11-25 23:25:35.433738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.226 [2024-11-25 23:25:35.433745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:03.226 [2024-11-25 23:25:35.433752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:03.226 [2024-11-25 23:25:35.433758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.226 [2024-11-25 23:25:35.433777] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:03.226 [2024-11-25 23:25:35.436806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.226 [2024-11-25 23:25:35.436827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:03.226 [2024-11-25 23:25:35.436836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.034 ms 00:24:03.226 [2024-11-25 23:25:35.436842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.226 [2024-11-25 23:25:35.436877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.226 [2024-11-25 23:25:35.436884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:03.226 [2024-11-25 23:25:35.436891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:03.226 [2024-11-25 23:25:35.436896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.226 [2024-11-25 23:25:35.436911] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:03.226 [2024-11-25 23:25:35.436926] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:03.226 [2024-11-25 23:25:35.436955] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:03.226 [2024-11-25 23:25:35.436970] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:03.226 [2024-11-25 23:25:35.437051] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:03.226 [2024-11-25 23:25:35.437070] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:03.226 [2024-11-25 23:25:35.437079] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:03.226 [2024-11-25 23:25:35.437087] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:03.226 [2024-11-25 23:25:35.437094] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:03.226 [2024-11-25 23:25:35.437100] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:03.226 [2024-11-25 23:25:35.437106] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:03.226 [2024-11-25 23:25:35.437111] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:03.226 [2024-11-25 23:25:35.437120] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:03.226 [2024-11-25 23:25:35.437127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.226 [2024-11-25 23:25:35.437133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:03.226 [2024-11-25 23:25:35.437139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:24:03.226 [2024-11-25 23:25:35.437144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.226 [2024-11-25 23:25:35.437207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.226 [2024-11-25 23:25:35.437214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:03.226 [2024-11-25 23:25:35.437220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:03.226 [2024-11-25 23:25:35.437225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.226 [2024-11-25 23:25:35.437303] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:03.226 [2024-11-25 23:25:35.437311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:03.226 [2024-11-25 23:25:35.437318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:03.226 [2024-11-25 23:25:35.437324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.226 [2024-11-25 23:25:35.437330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:03.226 [2024-11-25 23:25:35.437335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:03.226 [2024-11-25 23:25:35.437340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:03.226 [2024-11-25 23:25:35.437346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:03.226 [2024-11-25 23:25:35.437351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:03.226 [2024-11-25 23:25:35.437356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:03.226 [2024-11-25 23:25:35.437361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:03.226 [2024-11-25 23:25:35.437369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:03.226 [2024-11-25 23:25:35.437374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:03.226 [2024-11-25 23:25:35.437384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:03.226 [2024-11-25 23:25:35.437389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:03.226 [2024-11-25 23:25:35.437394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.226 [2024-11-25 23:25:35.437400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:03.226 [2024-11-25 23:25:35.437405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:03.226 [2024-11-25 23:25:35.437412] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.226 [2024-11-25 23:25:35.437418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:03.226 [2024-11-25 23:25:35.437423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:03.226 [2024-11-25 23:25:35.437428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.226 [2024-11-25 23:25:35.437433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:03.226 [2024-11-25 23:25:35.437438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:03.226 [2024-11-25 23:25:35.437443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.226 [2024-11-25 23:25:35.437448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:03.226 [2024-11-25 23:25:35.437453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:03.226 [2024-11-25 23:25:35.437458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.226 [2024-11-25 23:25:35.437463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:03.226 [2024-11-25 23:25:35.437468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:03.226 [2024-11-25 23:25:35.437473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.226 [2024-11-25 23:25:35.437478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:03.226 [2024-11-25 23:25:35.437483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:03.226 [2024-11-25 23:25:35.437488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:03.226 [2024-11-25 23:25:35.437494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:03.226 [2024-11-25 23:25:35.437498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:03.226 [2024-11-25 23:25:35.437503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:03.226 [2024-11-25 23:25:35.437509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:03.226 [2024-11-25 23:25:35.437514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:03.226 [2024-11-25 23:25:35.437519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.226 [2024-11-25 23:25:35.437524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:03.226 [2024-11-25 23:25:35.437529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:03.226 [2024-11-25 23:25:35.437534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.226 [2024-11-25 23:25:35.437540] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:03.226 [2024-11-25 23:25:35.437546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:03.226 [2024-11-25 23:25:35.437554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:03.226 [2024-11-25 23:25:35.437560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.226 [2024-11-25 23:25:35.437566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:03.226 [2024-11-25 23:25:35.437571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:03.226 [2024-11-25 23:25:35.437576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:03.226 
[2024-11-25 23:25:35.437582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:03.226 [2024-11-25 23:25:35.437587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:03.226 [2024-11-25 23:25:35.437592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:03.226 [2024-11-25 23:25:35.437598] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:03.226 [2024-11-25 23:25:35.437607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:03.226 [2024-11-25 23:25:35.437615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:03.226 [2024-11-25 23:25:35.437621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:03.226 [2024-11-25 23:25:35.437626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:03.226 [2024-11-25 23:25:35.437631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:03.227 [2024-11-25 23:25:35.437636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:03.227 [2024-11-25 23:25:35.437642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:03.227 [2024-11-25 23:25:35.437647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:03.227 [2024-11-25 23:25:35.437653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:03.227 [2024-11-25 23:25:35.437658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:03.227 [2024-11-25 23:25:35.437663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:03.227 [2024-11-25 23:25:35.437668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:03.227 [2024-11-25 23:25:35.437674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:03.227 [2024-11-25 23:25:35.437679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:03.227 [2024-11-25 23:25:35.437684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:03.227 [2024-11-25 23:25:35.437698] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:03.227 [2024-11-25 23:25:35.437705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:03.227 [2024-11-25 23:25:35.437711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:03.227 [2024-11-25 23:25:35.437716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:03.227 [2024-11-25 23:25:35.437721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:03.227 [2024-11-25 23:25:35.437726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:03.227 [2024-11-25 23:25:35.437733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.227 [2024-11-25 23:25:35.437739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:03.227 [2024-11-25 23:25:35.437746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:24:03.227 [2024-11-25 23:25:35.437752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.227 [2024-11-25 23:25:35.461942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.227 [2024-11-25 23:25:35.461968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:03.227 [2024-11-25 23:25:35.461977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.145 ms 00:24:03.227 [2024-11-25 23:25:35.461986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.227 [2024-11-25 23:25:35.462051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.227 [2024-11-25 23:25:35.462068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:03.227 [2024-11-25 23:25:35.462074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:03.227 [2024-11-25 23:25:35.462081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.227 [2024-11-25 23:25:35.499180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.227 [2024-11-25 23:25:35.499212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:03.227 [2024-11-25 23:25:35.499221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.058 ms 00:24:03.227 [2024-11-25 23:25:35.499228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.227 [2024-11-25 23:25:35.499261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.227 [2024-11-25 23:25:35.499269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:03.227 [2024-11-25 23:25:35.499279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:03.227 [2024-11-25 23:25:35.499284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.227 [2024-11-25 23:25:35.499704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.227 [2024-11-25 23:25:35.499718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:03.227 [2024-11-25 23:25:35.499725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:24:03.227 [2024-11-25 23:25:35.499731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.227 [2024-11-25 23:25:35.499842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.227 [2024-11-25 23:25:35.499850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:03.227 [2024-11-25 23:25:35.499856] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:24:03.227 [2024-11-25 23:25:35.499866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.227 [2024-11-25 23:25:35.511778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.227 [2024-11-25 23:25:35.511804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:03.227 [2024-11-25 23:25:35.511813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.895 ms 00:24:03.227 [2024-11-25 23:25:35.511819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.227 [2024-11-25 23:25:35.522486] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:03.227 [2024-11-25 23:25:35.522513] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:03.227 [2024-11-25 23:25:35.522523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.227 [2024-11-25 23:25:35.522530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:03.227 [2024-11-25 23:25:35.522536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.557 ms 00:24:03.227 [2024-11-25 23:25:35.522542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.227 [2024-11-25 23:25:35.541197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.227 [2024-11-25 23:25:35.541224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:03.227 [2024-11-25 23:25:35.541233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.623 ms 00:24:03.227 [2024-11-25 23:25:35.541240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.227 [2024-11-25 23:25:35.550583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.227 [2024-11-25 23:25:35.550608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:03.227 [2024-11-25 23:25:35.550616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.305 ms 00:24:03.227 [2024-11-25 23:25:35.550621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.227 [2024-11-25 23:25:35.559610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.227 [2024-11-25 23:25:35.559634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:03.227 [2024-11-25 23:25:35.559642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.964 ms 00:24:03.227 [2024-11-25 23:25:35.559648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.227 [2024-11-25 23:25:35.560119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.227 [2024-11-25 23:25:35.560140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:03.227 [2024-11-25 23:25:35.560150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:24:03.227 [2024-11-25 23:25:35.560155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.489 [2024-11-25 23:25:35.608136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.489 [2024-11-25 23:25:35.608167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:03.489 [2024-11-25 23:25:35.608182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 47.967 ms 00:24:03.489 [2024-11-25 23:25:35.608189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.489 [2024-11-25 23:25:35.616477] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:03.489 [2024-11-25 23:25:35.618713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.489 [2024-11-25 23:25:35.618737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:03.489 [2024-11-25 23:25:35.618746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.491 ms 00:24:03.489 [2024-11-25 23:25:35.618754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.489 [2024-11-25 23:25:35.618811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.489 [2024-11-25 23:25:35.618819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:03.489 [2024-11-25 23:25:35.618827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:03.489 [2024-11-25 23:25:35.618835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.489 [2024-11-25 23:25:35.618905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.489 [2024-11-25 23:25:35.618915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:03.489 [2024-11-25 23:25:35.618922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:03.489 [2024-11-25 23:25:35.618929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.489 [2024-11-25 23:25:35.618944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.489 [2024-11-25 23:25:35.618951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:03.489 [2024-11-25 23:25:35.618958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:03.489 [2024-11-25 23:25:35.618964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.489 [2024-11-25 23:25:35.618993] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:03.489 [2024-11-25 23:25:35.619001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.489 [2024-11-25 23:25:35.619007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:03.489 [2024-11-25 23:25:35.619014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:03.489 [2024-11-25 23:25:35.619020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.489 [2024-11-25 23:25:35.637504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.489 [2024-11-25 23:25:35.637531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:03.489 [2024-11-25 23:25:35.637540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.472 ms 00:24:03.489 [2024-11-25 23:25:35.637550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.489 [2024-11-25 23:25:35.637607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.489 [2024-11-25 23:25:35.637615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:03.489 [2024-11-25 23:25:35.637622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:03.489 [2024-11-25 23:25:35.637628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:03.489 [2024-11-25 23:25:35.638811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.508 ms, result 0 00:24:04.431  [2024-11-25T23:25:37.784Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-25T23:25:38.770Z] Copying: 21/1024 [MB] (10 MBps) [2024-11-25T23:25:39.710Z] Copying: 32/1024 [MB] (10 MBps) [2024-11-25T23:25:41.093Z] Copying: 43/1024 [MB] (11 MBps) [2024-11-25T23:25:41.666Z] Copying: 54/1024 [MB] (11 MBps) [2024-11-25T23:25:43.053Z] Copying: 66/1024 [MB] (11 MBps) [2024-11-25T23:25:43.997Z] Copying: 78/1024 [MB] (11 MBps) [2024-11-25T23:25:44.940Z] Copying: 89/1024 [MB] (11 MBps) [2024-11-25T23:25:45.884Z] Copying: 100/1024 [MB] (11 MBps) [2024-11-25T23:25:46.827Z] Copying: 111/1024 [MB] (10 MBps) [2024-11-25T23:25:47.769Z] Copying: 122/1024 [MB] (11 MBps) [2024-11-25T23:25:48.714Z] Copying: 134/1024 [MB] (11 MBps) [2024-11-25T23:25:49.657Z] Copying: 145/1024 [MB] (11 MBps) [2024-11-25T23:25:51.043Z] Copying: 157/1024 [MB] (11 MBps) [2024-11-25T23:25:51.987Z] Copying: 168/1024 [MB] (11 MBps) [2024-11-25T23:25:52.928Z] Copying: 180/1024 [MB] (11 MBps) [2024-11-25T23:25:53.870Z] Copying: 191/1024 [MB] (11 MBps) [2024-11-25T23:25:54.812Z] Copying: 203/1024 [MB] (11 MBps) [2024-11-25T23:25:55.752Z] Copying: 214/1024 [MB] (11 MBps) [2024-11-25T23:25:56.696Z] Copying: 225/1024 [MB] (11 MBps) [2024-11-25T23:25:58.082Z] Copying: 236/1024 [MB] (11 MBps) [2024-11-25T23:25:59.026Z] Copying: 247/1024 [MB] (11 MBps) [2024-11-25T23:25:59.970Z] Copying: 258/1024 [MB] (11 MBps) [2024-11-25T23:26:00.913Z] Copying: 269/1024 [MB] (11 MBps) [2024-11-25T23:26:01.857Z] Copying: 280/1024 [MB] (11 MBps) [2024-11-25T23:26:02.798Z] Copying: 298/1024 [MB] (18 MBps) [2024-11-25T23:26:03.742Z] Copying: 310/1024 [MB] (11 MBps) [2024-11-25T23:26:04.685Z] Copying: 321/1024 [MB] (10 MBps) [2024-11-25T23:26:06.073Z] Copying: 331/1024 [MB] (10 MBps) [2024-11-25T23:26:07.017Z] Copying: 342/1024 [MB] (11 MBps) [2024-11-25T23:26:07.962Z] Copying: 355/1024 [MB] (12 MBps) [2024-11-25T23:26:08.904Z] Copying: 366/1024 [MB] (11 MBps) [2024-11-25T23:26:09.875Z] Copying: 378/1024 [MB] (11 MBps) [2024-11-25T23:26:10.832Z] Copying: 390/1024 [MB] (11 MBps) [2024-11-25T23:26:11.775Z] Copying: 401/1024 [MB] (11 MBps) [2024-11-25T23:26:12.718Z] Copying: 413/1024 [MB] (12 MBps) [2024-11-25T23:26:13.660Z] Copying: 425/1024 [MB] (11 MBps) [2024-11-25T23:26:15.045Z] Copying: 436/1024 [MB] (11 MBps) [2024-11-25T23:26:15.988Z] Copying: 449/1024 [MB] (12 MBps) [2024-11-25T23:26:16.931Z] Copying: 460/1024 [MB] (11 MBps) [2024-11-25T23:26:17.875Z] Copying: 472/1024 [MB] (11 MBps) [2024-11-25T23:26:18.818Z] Copying: 484/1024 [MB] (11 MBps) [2024-11-25T23:26:19.762Z] Copying: 495/1024 [MB] (11 MBps) [2024-11-25T23:26:20.705Z] Copying: 507/1024 [MB] (11 MBps) [2024-11-25T23:26:22.093Z] Copying: 518/1024 [MB] (11 MBps) [2024-11-25T23:26:22.666Z] Copying: 529/1024 [MB] (10 MBps) [2024-11-25T23:26:24.052Z] Copying: 541/1024 [MB] (11 MBps) [2024-11-25T23:26:24.994Z] Copying: 552/1024 [MB] (10 MBps) [2024-11-25T23:26:25.937Z] Copying: 562/1024 [MB] (10 MBps) [2024-11-25T23:26:26.877Z] Copying: 573/1024 [MB] (11 MBps) [2024-11-25T23:26:27.820Z] Copying: 584/1024 [MB] (11 MBps) [2024-11-25T23:26:28.764Z] Copying: 596/1024 [MB] (11 MBps) [2024-11-25T23:26:29.710Z] Copying: 607/1024 [MB] (11 MBps) [2024-11-25T23:26:31.093Z] Copying: 618/1024 [MB] (11 MBps) [2024-11-25T23:26:31.665Z] Copying: 629/1024 [MB] (11 MBps) [2024-11-25T23:26:33.054Z] Copying: 640/1024 [MB] (11 MBps) 
[2024-11-25T23:26:34.000Z] Copying: 652/1024 [MB] (11 MBps) [2024-11-25T23:26:34.944Z] Copying: 664/1024 [MB] (11 MBps) [2024-11-25T23:26:35.889Z] Copying: 675/1024 [MB] (11 MBps) [2024-11-25T23:26:36.834Z] Copying: 687/1024 [MB] (11 MBps) [2024-11-25T23:26:37.779Z] Copying: 698/1024 [MB] (11 MBps) [2024-11-25T23:26:38.725Z] Copying: 709/1024 [MB] (10 MBps) [2024-11-25T23:26:39.671Z] Copying: 720/1024 [MB] (10 MBps) [2024-11-25T23:26:41.060Z] Copying: 731/1024 [MB] (11 MBps) [2024-11-25T23:26:41.723Z] Copying: 742/1024 [MB] (10 MBps) [2024-11-25T23:26:42.676Z] Copying: 754/1024 [MB] (11 MBps) [2024-11-25T23:26:44.064Z] Copying: 765/1024 [MB] (11 MBps) [2024-11-25T23:26:45.009Z] Copying: 777/1024 [MB] (11 MBps) [2024-11-25T23:26:45.953Z] Copying: 788/1024 [MB] (11 MBps) [2024-11-25T23:26:46.898Z] Copying: 800/1024 [MB] (11 MBps) [2024-11-25T23:26:47.840Z] Copying: 810/1024 [MB] (10 MBps) [2024-11-25T23:26:48.783Z] Copying: 822/1024 [MB] (11 MBps) [2024-11-25T23:26:49.742Z] Copying: 833/1024 [MB] (11 MBps) [2024-11-25T23:26:50.686Z] Copying: 845/1024 [MB] (11 MBps) [2024-11-25T23:26:52.074Z] Copying: 856/1024 [MB] (11 MBps) [2024-11-25T23:26:53.018Z] Copying: 868/1024 [MB] (11 MBps) [2024-11-25T23:26:53.959Z] Copying: 879/1024 [MB] (11 MBps) [2024-11-25T23:26:54.901Z] Copying: 890/1024 [MB] (11 MBps) [2024-11-25T23:26:55.844Z] Copying: 901/1024 [MB] (10 MBps) [2024-11-25T23:26:56.787Z] Copying: 912/1024 [MB] (11 MBps) [2024-11-25T23:26:57.731Z] Copying: 923/1024 [MB] (11 MBps) [2024-11-25T23:26:58.677Z] Copying: 934/1024 [MB] (11 MBps) [2024-11-25T23:27:00.065Z] Copying: 944/1024 [MB] (10 MBps) [2024-11-25T23:27:01.008Z] Copying: 954/1024 [MB] (10 MBps) [2024-11-25T23:27:01.952Z] Copying: 965/1024 [MB] (10 MBps) [2024-11-25T23:27:02.895Z] Copying: 976/1024 [MB] (11 MBps) [2024-11-25T23:27:03.838Z] Copying: 987/1024 [MB] (11 MBps) [2024-11-25T23:27:04.779Z] Copying: 998/1024 [MB] (11 MBps) [2024-11-25T23:27:05.721Z] Copying: 1012/1024 [MB] (13 MBps) [2024-11-25T23:27:06.664Z] Copying: 1023/1024 [MB] (10 MBps) [2024-11-25T23:27:06.664Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-25 23:27:06.496584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.295 [2024-11-25 23:27:06.496958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:34.295 [2024-11-25 23:27:06.496999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:34.295 [2024-11-25 23:27:06.497010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.295 [2024-11-25 23:27:06.500182] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:34.295 [2024-11-25 23:27:06.505636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.295 [2024-11-25 23:27:06.505683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:34.295 [2024-11-25 23:27:06.505697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.401 ms 00:25:34.295 [2024-11-25 23:27:06.505707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.295 [2024-11-25 23:27:06.520996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.295 [2024-11-25 23:27:06.521052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:34.295 [2024-11-25 23:27:06.521078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.279 ms 00:25:34.295 [2024-11-25 23:27:06.521096] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.295 [2024-11-25 23:27:06.546302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.295 [2024-11-25 23:27:06.546346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:34.295 [2024-11-25 23:27:06.546358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.186 ms 00:25:34.295 [2024-11-25 23:27:06.546367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.295 [2024-11-25 23:27:06.552537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.295 [2024-11-25 23:27:06.552576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:34.295 [2024-11-25 23:27:06.552589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.132 ms 00:25:34.295 [2024-11-25 23:27:06.552598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.295 [2024-11-25 23:27:06.580702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.295 [2024-11-25 23:27:06.580745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:34.295 [2024-11-25 23:27:06.580758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.050 ms 00:25:34.295 [2024-11-25 23:27:06.580768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.295 [2024-11-25 23:27:06.598732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.295 [2024-11-25 23:27:06.598777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:34.295 [2024-11-25 23:27:06.598790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.917 ms 00:25:34.295 [2024-11-25 23:27:06.598799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.556 [2024-11-25 23:27:06.884100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.556 [2024-11-25 23:27:06.884165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:34.556 [2024-11-25 23:27:06.884180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 285.248 ms 00:25:34.556 [2024-11-25 23:27:06.884191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.556 [2024-11-25 23:27:06.910829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.556 [2024-11-25 23:27:06.910871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:34.556 [2024-11-25 23:27:06.910884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.619 ms 00:25:34.556 [2024-11-25 23:27:06.910893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.819 [2024-11-25 23:27:06.936532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.819 [2024-11-25 23:27:06.936572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:34.819 [2024-11-25 23:27:06.936584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.592 ms 00:25:34.819 [2024-11-25 23:27:06.936592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.819 [2024-11-25 23:27:06.961514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.819 [2024-11-25 23:27:06.961556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:34.819 [2024-11-25 23:27:06.961568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.876 ms 00:25:34.819 [2024-11-25 23:27:06.961576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.819 [2024-11-25 23:27:06.985991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.819 [2024-11-25 23:27:06.986032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:34.819 [2024-11-25 23:27:06.986043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.325 ms 00:25:34.819 [2024-11-25 23:27:06.986052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.819 [2024-11-25 23:27:06.986114] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:34.819 [2024-11-25 23:27:06.986132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 99840 / 261120 wr_cnt: 1 state: open 00:25:34.819 [2024-11-25 23:27:06.986144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 
[2024-11-25 23:27:06.986310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 
state: free 00:25:34.819 [2024-11-25 23:27:06.986523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:34.819 [2024-11-25 23:27:06.986698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 
0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:34.820 [2024-11-25 23:27:06.986988] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:34.820 [2024-11-25 23:27:06.986999] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a5516937-5c6e-4854-a6f9-4ae2e284b0cc 00:25:34.820 [2024-11-25 23:27:06.987008] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 99840 00:25:34.820 [2024-11-25 23:27:06.987016] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 100800 00:25:34.820 [2024-11-25 23:27:06.987024] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 99840 00:25:34.820 [2024-11-25 23:27:06.987033] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096 00:25:34.820 [2024-11-25 23:27:06.987053] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:34.820 [2024-11-25 23:27:06.987076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:34.820 [2024-11-25 23:27:06.987085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:34.820 [2024-11-25 23:27:06.987092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:34.820 [2024-11-25 23:27:06.987100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:34.820 [2024-11-25 23:27:06.987108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.820 [2024-11-25 23:27:06.987119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:34.820 [2024-11-25 23:27:06.987129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:25:34.820 [2024-11-25 23:27:06.987138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.820 [2024-11-25 23:27:07.001928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.820 [2024-11-25 23:27:07.001964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:34.820 [2024-11-25 23:27:07.001982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.770 ms 00:25:34.820 [2024-11-25 23:27:07.001992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.820 [2024-11-25 23:27:07.002437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.820 [2024-11-25 23:27:07.002460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:34.820 [2024-11-25 23:27:07.002471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:25:34.820 [2024-11-25 23:27:07.002480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.820 [2024-11-25 23:27:07.041954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.820 [2024-11-25 23:27:07.042006] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:34.820 [2024-11-25 23:27:07.042018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.820 [2024-11-25 23:27:07.042029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.820 [2024-11-25 23:27:07.042113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.820 [2024-11-25 23:27:07.042125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:34.820 [2024-11-25 23:27:07.042134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.820 [2024-11-25 23:27:07.042143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.820 [2024-11-25 23:27:07.042232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.820 [2024-11-25 23:27:07.042246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:34.820 [2024-11-25 23:27:07.042259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.820 [2024-11-25 23:27:07.042268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.820 [2024-11-25 23:27:07.042285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.820 [2024-11-25 23:27:07.042295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:34.820 [2024-11-25 23:27:07.042304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.820 [2024-11-25 23:27:07.042311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.820 [2024-11-25 23:27:07.134801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.820 [2024-11-25 23:27:07.134877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:34.820 [2024-11-25 23:27:07.134891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.820 [2024-11-25 23:27:07.134900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.081 [2024-11-25 23:27:07.209344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.081 [2024-11-25 23:27:07.209401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.081 [2024-11-25 23:27:07.209414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.081 [2024-11-25 23:27:07.209424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.081 [2024-11-25 23:27:07.209498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.081 [2024-11-25 23:27:07.209510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.081 [2024-11-25 23:27:07.209519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.081 [2024-11-25 23:27:07.209534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.081 [2024-11-25 23:27:07.209604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.081 [2024-11-25 23:27:07.209617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.081 [2024-11-25 23:27:07.209627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.081 [2024-11-25 23:27:07.209636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.081 [2024-11-25 23:27:07.209743] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:25:35.081 [2024-11-25 23:27:07.209761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.081 [2024-11-25 23:27:07.209773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.081 [2024-11-25 23:27:07.209782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.081 [2024-11-25 23:27:07.209824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.081 [2024-11-25 23:27:07.209835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:35.081 [2024-11-25 23:27:07.209844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.081 [2024-11-25 23:27:07.209854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.081 [2024-11-25 23:27:07.209906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.081 [2024-11-25 23:27:07.209918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.081 [2024-11-25 23:27:07.209929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.081 [2024-11-25 23:27:07.209937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.081 [2024-11-25 23:27:07.209999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.081 [2024-11-25 23:27:07.210012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.081 [2024-11-25 23:27:07.210021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.081 [2024-11-25 23:27:07.210033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.081 [2024-11-25 23:27:07.210225] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 716.446 ms, result 0 00:25:35.652 00:25:35.652 00:25:35.652 23:27:07 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:35.652 [2024-11-25 23:27:07.904865] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
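The spdk_dd command echoed above is the read-back half of the restore check: it pulls 262144 blocks out of the ftl0 bdev, offset by --skip=131072 blocks, into a scratch file that md5sum verifies at the end of this run. A minimal standalone sketch of the same flow, reusing the exact paths and flags from this log; the 4 KiB transfer unit is an assumption, inferred from the 1024 MiB copy total that follows (262144 x 4 KiB = 1024 MiB):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Read 262144 blocks (1024 MiB, assuming 4 KiB units) from the ftl0 bdev
    # into a plain file; ftl.json recreates the bdev stack for spdk_dd.
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile" \
        --json="$SPDK/test/ftl/config/ftl.json" --skip=131072 --count=262144
    # Verify the read-back data against the checksum recorded earlier.
    md5sum -c "$SPDK/test/ftl/testfile.md5"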
00:25:35.652 [2024-11-25 23:27:07.904964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80246 ] 00:25:35.913 [2024-11-25 23:27:08.055160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.913 [2024-11-25 23:27:08.146080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.174 [2024-11-25 23:27:08.373094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.174 [2024-11-25 23:27:08.373147] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.174 [2024-11-25 23:27:08.528940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.174 [2024-11-25 23:27:08.528980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:36.174 [2024-11-25 23:27:08.528992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:36.174 [2024-11-25 23:27:08.528998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.174 [2024-11-25 23:27:08.529035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.174 [2024-11-25 23:27:08.529044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:36.174 [2024-11-25 23:27:08.529052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:36.174 [2024-11-25 23:27:08.529069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.174 [2024-11-25 23:27:08.529083] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:36.174 [2024-11-25 23:27:08.529626] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:36.174 [2024-11-25 23:27:08.529643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.174 [2024-11-25 23:27:08.529650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:36.174 [2024-11-25 23:27:08.529657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:25:36.174 [2024-11-25 23:27:08.529663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.174 [2024-11-25 23:27:08.530896] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:36.436 [2024-11-25 23:27:08.541378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.436 [2024-11-25 23:27:08.541407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:36.436 [2024-11-25 23:27:08.541417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.483 ms 00:25:36.436 [2024-11-25 23:27:08.541424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.436 [2024-11-25 23:27:08.541471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.436 [2024-11-25 23:27:08.541478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:36.436 [2024-11-25 23:27:08.541485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:36.436 [2024-11-25 23:27:08.541491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.436 [2024-11-25 23:27:08.547691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:36.436 [2024-11-25 23:27:08.547716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:36.436 [2024-11-25 23:27:08.547723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.158 ms 00:25:36.436 [2024-11-25 23:27:08.547732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.436 [2024-11-25 23:27:08.547790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.436 [2024-11-25 23:27:08.547797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:36.436 [2024-11-25 23:27:08.547804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:36.436 [2024-11-25 23:27:08.547810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.436 [2024-11-25 23:27:08.547846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.436 [2024-11-25 23:27:08.547854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:36.436 [2024-11-25 23:27:08.547860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:36.436 [2024-11-25 23:27:08.547867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.436 [2024-11-25 23:27:08.547885] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:36.436 [2024-11-25 23:27:08.550852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.436 [2024-11-25 23:27:08.550876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:36.436 [2024-11-25 23:27:08.550886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:25:36.436 [2024-11-25 23:27:08.550892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.436 [2024-11-25 23:27:08.550919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.436 [2024-11-25 23:27:08.550926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:36.436 [2024-11-25 23:27:08.550932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:36.436 [2024-11-25 23:27:08.550938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.436 [2024-11-25 23:27:08.550953] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:36.436 [2024-11-25 23:27:08.550969] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:36.436 [2024-11-25 23:27:08.550999] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:36.436 [2024-11-25 23:27:08.551013] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:36.436 [2024-11-25 23:27:08.551105] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:36.436 [2024-11-25 23:27:08.551115] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:36.436 [2024-11-25 23:27:08.551126] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:36.436 [2024-11-25 23:27:08.551134] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:36.436 [2024-11-25 23:27:08.551141] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:36.436 [2024-11-25 23:27:08.551148] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:36.436 [2024-11-25 23:27:08.551154] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:36.436 [2024-11-25 23:27:08.551160] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:36.436 [2024-11-25 23:27:08.551168] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:36.436 [2024-11-25 23:27:08.551174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.436 [2024-11-25 23:27:08.551180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:36.436 [2024-11-25 23:27:08.551186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:25:36.436 [2024-11-25 23:27:08.551192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.436 [2024-11-25 23:27:08.551254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.436 [2024-11-25 23:27:08.551261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:36.436 [2024-11-25 23:27:08.551268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:36.436 [2024-11-25 23:27:08.551274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.436 [2024-11-25 23:27:08.551352] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:36.436 [2024-11-25 23:27:08.551360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:36.436 [2024-11-25 23:27:08.551367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.436 [2024-11-25 23:27:08.551372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.436 [2024-11-25 23:27:08.551379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:36.436 [2024-11-25 23:27:08.551384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:36.436 [2024-11-25 23:27:08.551389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:36.436 [2024-11-25 23:27:08.551395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:36.436 [2024-11-25 23:27:08.551400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:36.436 [2024-11-25 23:27:08.551405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.436 [2024-11-25 23:27:08.551411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:36.436 [2024-11-25 23:27:08.551418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:36.436 [2024-11-25 23:27:08.551423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.436 [2024-11-25 23:27:08.551433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:36.436 [2024-11-25 23:27:08.551438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:36.436 [2024-11-25 23:27:08.551444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.436 [2024-11-25 23:27:08.551449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:36.436 [2024-11-25 23:27:08.551454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:36.436 [2024-11-25 23:27:08.551460] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.436 [2024-11-25 23:27:08.551465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:36.436 [2024-11-25 23:27:08.551470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:36.436 [2024-11-25 23:27:08.551475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.436 [2024-11-25 23:27:08.551480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:36.436 [2024-11-25 23:27:08.551485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:36.436 [2024-11-25 23:27:08.551490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.436 [2024-11-25 23:27:08.551496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:36.436 [2024-11-25 23:27:08.551502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:36.436 [2024-11-25 23:27:08.551506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.436 [2024-11-25 23:27:08.551512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:36.436 [2024-11-25 23:27:08.551519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:36.436 [2024-11-25 23:27:08.551524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.436 [2024-11-25 23:27:08.551529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:36.436 [2024-11-25 23:27:08.551534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:36.436 [2024-11-25 23:27:08.551539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.436 [2024-11-25 23:27:08.551544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:36.436 [2024-11-25 23:27:08.551549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:36.436 [2024-11-25 23:27:08.551554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.436 [2024-11-25 23:27:08.551560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:36.436 [2024-11-25 23:27:08.551565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:36.436 [2024-11-25 23:27:08.551570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.436 [2024-11-25 23:27:08.551574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:36.436 [2024-11-25 23:27:08.551580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:36.436 [2024-11-25 23:27:08.551585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.436 [2024-11-25 23:27:08.551591] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:36.436 [2024-11-25 23:27:08.551598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:36.436 [2024-11-25 23:27:08.551604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.436 [2024-11-25 23:27:08.551609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.436 [2024-11-25 23:27:08.551616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:36.436 [2024-11-25 23:27:08.551623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:36.436 [2024-11-25 23:27:08.551629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:36.436 
[2024-11-25 23:27:08.551634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:36.436 [2024-11-25 23:27:08.551640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:36.436 [2024-11-25 23:27:08.551645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:36.436 [2024-11-25 23:27:08.551652] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:36.436 [2024-11-25 23:27:08.551659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.436 [2024-11-25 23:27:08.551668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:36.436 [2024-11-25 23:27:08.551674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:36.436 [2024-11-25 23:27:08.551679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:36.437 [2024-11-25 23:27:08.551684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:36.437 [2024-11-25 23:27:08.551690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:36.437 [2024-11-25 23:27:08.551695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:36.437 [2024-11-25 23:27:08.551700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:36.437 [2024-11-25 23:27:08.551705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:36.437 [2024-11-25 23:27:08.551711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:36.437 [2024-11-25 23:27:08.551717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:36.437 [2024-11-25 23:27:08.551722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:36.437 [2024-11-25 23:27:08.551727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:36.437 [2024-11-25 23:27:08.551732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:36.437 [2024-11-25 23:27:08.551738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:36.437 [2024-11-25 23:27:08.551743] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:36.437 [2024-11-25 23:27:08.551889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.437 [2024-11-25 23:27:08.551895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:36.437 [2024-11-25 23:27:08.551901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:36.437 [2024-11-25 23:27:08.551907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:36.437 [2024-11-25 23:27:08.551912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:36.437 [2024-11-25 23:27:08.551921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.551927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:36.437 [2024-11-25 23:27:08.551933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:25:36.437 [2024-11-25 23:27:08.551939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.576202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.576230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:36.437 [2024-11-25 23:27:08.576239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.223 ms 00:25:36.437 [2024-11-25 23:27:08.576247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.576312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.576319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:36.437 [2024-11-25 23:27:08.576325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:36.437 [2024-11-25 23:27:08.576331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.626251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.626283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:36.437 [2024-11-25 23:27:08.626292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.878 ms 00:25:36.437 [2024-11-25 23:27:08.626299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.626331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.626339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:36.437 [2024-11-25 23:27:08.626349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:36.437 [2024-11-25 23:27:08.626355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.626778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.626800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:36.437 [2024-11-25 23:27:08.626809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:25:36.437 [2024-11-25 23:27:08.626815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.626930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.626938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:36.437 [2024-11-25 23:27:08.626948] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:25:36.437 [2024-11-25 23:27:08.626954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.638785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.638811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:36.437 [2024-11-25 23:27:08.638821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.814 ms 00:25:36.437 [2024-11-25 23:27:08.638827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.649417] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:36.437 [2024-11-25 23:27:08.649446] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:36.437 [2024-11-25 23:27:08.649455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.649462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:36.437 [2024-11-25 23:27:08.649469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.551 ms 00:25:36.437 [2024-11-25 23:27:08.649475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.668408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.668436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:36.437 [2024-11-25 23:27:08.668446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.902 ms 00:25:36.437 [2024-11-25 23:27:08.668453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.677694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.677720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:36.437 [2024-11-25 23:27:08.677728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.210 ms 00:25:36.437 [2024-11-25 23:27:08.677733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.686811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.686836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:36.437 [2024-11-25 23:27:08.686844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.051 ms 00:25:36.437 [2024-11-25 23:27:08.686850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.687332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.687354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:36.437 [2024-11-25 23:27:08.687364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:25:36.437 [2024-11-25 23:27:08.687370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.735833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.735870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:36.437 [2024-11-25 23:27:08.735885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.448 ms 00:25:36.437 [2024-11-25 23:27:08.735892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.744491] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:36.437 [2024-11-25 23:27:08.746841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.746864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:36.437 [2024-11-25 23:27:08.746873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.914 ms 00:25:36.437 [2024-11-25 23:27:08.746881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.746954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.746962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:36.437 [2024-11-25 23:27:08.746970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:36.437 [2024-11-25 23:27:08.746978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.748243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.748268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:36.437 [2024-11-25 23:27:08.748276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:25:36.437 [2024-11-25 23:27:08.748282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.748303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.748310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:36.437 [2024-11-25 23:27:08.748316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:36.437 [2024-11-25 23:27:08.748323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.748355] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:36.437 [2024-11-25 23:27:08.748363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.748370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:36.437 [2024-11-25 23:27:08.748376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:36.437 [2024-11-25 23:27:08.748382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.767299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.767326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:36.437 [2024-11-25 23:27:08.767336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.904 ms 00:25:36.437 [2024-11-25 23:27:08.767345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.437 [2024-11-25 23:27:08.767402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.437 [2024-11-25 23:27:08.767410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:36.437 [2024-11-25 23:27:08.767417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:36.437 [2024-11-25 23:27:08.767423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
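Every management step in the startup sequence above is logged as the same four trace_step records (Action, name, duration, status), so a per-step timing breakdown can be recovered from the saved console log. A quick-and-dirty sketch, assuming the raw one-record-per-line console output (not this reflowed excerpt); the log filename is illustrative:

    # Print "step name  duration" pairs for every FTL management step.
    awk -F'name: |duration: ' \
        '/trace_step/ && /name:/     { step = $2 }
         /trace_step/ && /duration:/ { printf "%-32s %s\n", step, $2 }' nvme-vg-autotest.log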
00:25:36.437 [2024-11-25 23:27:08.768365] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 239.039 ms, result 0 00:25:37.825  [2024-11-25T23:27:11.137Z] Copying: 9188/1048576 [kB] (9188 kBps) [2024-11-25T23:27:12.082Z] Copying: 20/1024 [MB] (11 MBps) [... per-second progress entries from 31/1024 through 1022/1024 [MB], all at 10-12 MBps, elided ...] [2024-11-25T23:28:38.313Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-25 23:28:38.249393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.944 [2024-11-25 23:28:38.249473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:05.944 [2024-11-25 23:28:38.249491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:05.944 [2024-11-25 23:28:38.249506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.944 [2024-11-25 23:28:38.249534] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:05.944 [2024-11-25 23:28:38.253844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.944 [2024-11-25 23:28:38.253883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:05.944 [2024-11-25 23:28:38.253895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.291 ms 00:27:05.944 [2024-11-25 23:28:38.253911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.944 [2024-11-25 23:28:38.254709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.944 [2024-11-25 23:28:38.254737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:05.944 [2024-11-25 23:28:38.254749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:27:05.944 [2024-11-25 23:28:38.254764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.944
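The WAF figure in the 'Dump statistics' blocks is simply total media writes divided by user writes: 100800 / 99840 ≈ 1.0096 in the dump earlier in this run, and 32960 / 32000 = 1.0300 in the one that follows below. It can be recomputed from a saved log with a one-liner (again assuming the raw one-record-per-line console output; the filename is illustrative):

    # Recompute write amplification from each ftl_dev_dump_stats block.
    awk '/total writes:/ { total = $NF }
         /user writes:/  { printf "WAF: %.4f\n", total / $NF }' nvme-vg-autotest.log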
[2024-11-25 23:28:38.260518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.944 [2024-11-25 23:28:38.260549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:05.944 [2024-11-25 23:28:38.260557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.737 ms 00:27:05.944 [2024-11-25 23:28:38.260564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.944 [2024-11-25 23:28:38.265291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.944 [2024-11-25 23:28:38.265318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:05.944 [2024-11-25 23:28:38.265326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.697 ms 00:27:05.944 [2024-11-25 23:28:38.265333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.944 [2024-11-25 23:28:38.285047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.944 [2024-11-25 23:28:38.285084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:05.944 [2024-11-25 23:28:38.285093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.677 ms 00:27:05.944 [2024-11-25 23:28:38.285100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.944 [2024-11-25 23:28:38.297349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.944 [2024-11-25 23:28:38.297380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:05.944 [2024-11-25 23:28:38.297390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.221 ms 00:27:05.945 [2024-11-25 23:28:38.297397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.518 [2024-11-25 23:28:38.622382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.518 [2024-11-25 23:28:38.622411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:06.518 [2024-11-25 23:28:38.622421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 324.952 ms 00:27:06.518 [2024-11-25 23:28:38.622427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.518 [2024-11-25 23:28:38.641124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.518 [2024-11-25 23:28:38.641150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:06.518 [2024-11-25 23:28:38.641159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.685 ms 00:27:06.518 [2024-11-25 23:28:38.641165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.518 [2024-11-25 23:28:38.659033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.518 [2024-11-25 23:28:38.659063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:06.519 [2024-11-25 23:28:38.659072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.843 ms 00:27:06.519 [2024-11-25 23:28:38.659079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.519 [2024-11-25 23:28:38.676959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.519 [2024-11-25 23:28:38.676984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:06.519 [2024-11-25 23:28:38.676992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.855 ms 00:27:06.519 [2024-11-25 23:28:38.676998] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.519 [2024-11-25 23:28:38.694023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.519 [2024-11-25 23:28:38.694047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:06.519 [2024-11-25 23:28:38.694060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.981 ms 00:27:06.519 [2024-11-25 23:28:38.694066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.519 [2024-11-25 23:28:38.694092] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:06.519 [2024-11-25 23:28:38.694104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open [... Bands 2-94: 0 / 261120 wr_cnt: 0 state: free (93 identical entries elided) ...] 00:27:06.519 [2024-11-25 23:28:38.694649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120
wr_cnt: 0 state: free 00:27:06.520 [2024-11-25 23:28:38.694654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:06.520 [2024-11-25 23:28:38.694660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:06.520 [2024-11-25 23:28:38.694665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:06.520 [2024-11-25 23:28:38.694671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:06.520 [2024-11-25 23:28:38.694677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:06.520 [2024-11-25 23:28:38.694690] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:06.520 [2024-11-25 23:28:38.694696] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a5516937-5c6e-4854-a6f9-4ae2e284b0cc 00:27:06.520 [2024-11-25 23:28:38.694702] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131840 00:27:06.520 [2024-11-25 23:28:38.694708] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32960 00:27:06.520 [2024-11-25 23:28:38.694714] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 32000 00:27:06.520 [2024-11-25 23:28:38.694720] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0300 00:27:06.520 [2024-11-25 23:28:38.694726] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:06.520 [2024-11-25 23:28:38.694740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:06.520 [2024-11-25 23:28:38.694747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:06.520 [2024-11-25 23:28:38.694752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:06.520 [2024-11-25 23:28:38.694757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:06.520 [2024-11-25 23:28:38.694762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.520 [2024-11-25 23:28:38.694768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:06.520 [2024-11-25 23:28:38.694775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:27:06.520 [2024-11-25 23:28:38.694781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.520 [2024-11-25 23:28:38.704784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.520 [2024-11-25 23:28:38.704808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:06.520 [2024-11-25 23:28:38.704816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.992 ms 00:27:06.520 [2024-11-25 23:28:38.704826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.520 [2024-11-25 23:28:38.705130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.520 [2024-11-25 23:28:38.705139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:06.520 [2024-11-25 23:28:38.705146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:27:06.520 [2024-11-25 23:28:38.705152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.520 [2024-11-25 23:28:38.732548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.520 [2024-11-25 23:28:38.732577] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:06.520 [2024-11-25 23:28:38.732585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.520 [2024-11-25 23:28:38.732591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.520 [2024-11-25 23:28:38.732631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.520 [2024-11-25 23:28:38.732638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:06.520 [2024-11-25 23:28:38.732645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.520 [2024-11-25 23:28:38.732651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.520 [2024-11-25 23:28:38.732692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.520 [2024-11-25 23:28:38.732700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:06.520 [2024-11-25 23:28:38.732710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.520 [2024-11-25 23:28:38.732717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.520 [2024-11-25 23:28:38.732729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.520 [2024-11-25 23:28:38.732735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:06.520 [2024-11-25 23:28:38.732742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.520 [2024-11-25 23:28:38.732748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.520 [2024-11-25 23:28:38.795240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.520 [2024-11-25 23:28:38.795278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:06.520 [2024-11-25 23:28:38.795286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.520 [2024-11-25 23:28:38.795293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.520 [2024-11-25 23:28:38.845729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.520 [2024-11-25 23:28:38.845765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:06.520 [2024-11-25 23:28:38.845774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.520 [2024-11-25 23:28:38.845781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.520 [2024-11-25 23:28:38.845846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.520 [2024-11-25 23:28:38.845854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:06.520 [2024-11-25 23:28:38.845861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.520 [2024-11-25 23:28:38.845872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.520 [2024-11-25 23:28:38.845906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.520 [2024-11-25 23:28:38.845914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:06.520 [2024-11-25 23:28:38.845921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.520 [2024-11-25 23:28:38.845928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.520 [2024-11-25 23:28:38.846002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:27:06.520 [2024-11-25 23:28:38.846010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:06.520 [2024-11-25 23:28:38.846017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.520 [2024-11-25 23:28:38.846023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.520 [2024-11-25 23:28:38.846049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.520 [2024-11-25 23:28:38.846066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:06.520 [2024-11-25 23:28:38.846073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.520 [2024-11-25 23:28:38.846080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.520 [2024-11-25 23:28:38.846116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.520 [2024-11-25 23:28:38.846123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:06.520 [2024-11-25 23:28:38.846129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.520 [2024-11-25 23:28:38.846135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.520 [2024-11-25 23:28:38.846179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.520 [2024-11-25 23:28:38.846187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:06.520 [2024-11-25 23:28:38.846193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.520 [2024-11-25 23:28:38.846199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.520 [2024-11-25 23:28:38.846310] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 596.892 ms, result 0 00:27:07.092 00:27:07.092 00:27:07.092 23:28:39 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:09.705 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:09.705 23:28:41 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:09.705 23:28:41 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:27:09.705 23:28:41 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:09.705 23:28:41 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:09.705 23:28:41 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:09.705 23:28:41 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77436 00:27:09.705 23:28:41 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77436 ']' 00:27:09.705 23:28:41 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77436 00:27:09.705 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77436) - No such process 00:27:09.705 Process with pid 77436 is not found 00:27:09.705 23:28:41 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77436 is not found' 00:27:09.705 Remove shared memory files 00:27:09.705 23:28:41 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:27:09.705 23:28:41 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:09.706 23:28:41 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:27:09.706 23:28:41 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:27:09.706 
23:28:41 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:27:09.706 23:28:41 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:09.706 23:28:41 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:27:09.706 00:27:09.706 real 6m6.153s 00:27:09.706 user 5m55.659s 00:27:09.706 sys 0m10.372s 00:27:09.706 23:28:41 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.706 23:28:41 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:27:09.706 ************************************ 00:27:09.706 END TEST ftl_restore 00:27:09.706 ************************************ 00:27:09.706 23:28:41 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:09.706 23:28:41 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:09.706 23:28:41 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:09.706 23:28:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:09.706 ************************************ 00:27:09.706 START TEST ftl_dirty_shutdown 00:27:09.706 ************************************ 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:09.706 * Looking for test storage... 00:27:09.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:09.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.706 --rc genhtml_branch_coverage=1 00:27:09.706 --rc genhtml_function_coverage=1 00:27:09.706 --rc genhtml_legend=1 00:27:09.706 --rc geninfo_all_blocks=1 00:27:09.706 --rc geninfo_unexecuted_blocks=1 00:27:09.706 00:27:09.706 ' 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:09.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.706 --rc genhtml_branch_coverage=1 00:27:09.706 --rc genhtml_function_coverage=1 00:27:09.706 --rc genhtml_legend=1 00:27:09.706 --rc geninfo_all_blocks=1 00:27:09.706 --rc geninfo_unexecuted_blocks=1 00:27:09.706 00:27:09.706 ' 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:09.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.706 --rc genhtml_branch_coverage=1 00:27:09.706 --rc genhtml_function_coverage=1 00:27:09.706 --rc genhtml_legend=1 00:27:09.706 --rc geninfo_all_blocks=1 00:27:09.706 --rc geninfo_unexecuted_blocks=1 00:27:09.706 00:27:09.706 ' 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:09.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.706 --rc genhtml_branch_coverage=1 00:27:09.706 --rc genhtml_function_coverage=1 00:27:09.706 --rc genhtml_legend=1 00:27:09.706 --rc geninfo_all_blocks=1 00:27:09.706 --rc geninfo_unexecuted_blocks=1 00:27:09.706 00:27:09.706 ' 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:09.706 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:27:09.707 23:28:41 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81271 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81271 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81271 ']' 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.707 23:28:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:09.707 [2024-11-25 23:28:42.042868] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
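For reference, the waitforlisten step above amounts to launching spdk_tgt in the background and polling its UNIX-domain RPC socket until the target answers. A minimal bash sketch of that pattern, assuming the binary and script paths shown in the log (the polling loop and the use of the spdk_get_version RPC as a liveness probe are stand-ins for the real helper in autotest_common.sh, not its actual code):

    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Launch the target on core 0, matching the -m 0x1 mask recorded above.
    "$spdk_tgt_bin" -m 0x1 &
    svcpid=$!
    # Poll the default RPC socket until the target responds.
    until "$rpc_py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || exit 1  # give up if the target died during startup
        sleep 0.5
    done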
00:27:09.707 [2024-11-25 23:28:42.042979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81271 ] 00:27:09.969 [2024-11-25 23:28:42.186283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.969 [2024-11-25 23:28:42.274214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.542 23:28:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.542 23:28:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:10.542 23:28:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:10.542 23:28:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:27:10.542 23:28:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:10.542 23:28:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:27:10.542 23:28:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:10.542 23:28:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:10.802 23:28:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:10.803 23:28:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:10.803 23:28:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:10.803 23:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:10.803 23:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:10.803 23:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:10.803 23:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:10.803 23:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:11.064 23:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:11.064 { 00:27:11.064 "name": "nvme0n1", 00:27:11.064 "aliases": [ 00:27:11.064 "c208fdfb-6d04-4f29-bbdc-e54a6c4a45d3" 00:27:11.064 ], 00:27:11.064 "product_name": "NVMe disk", 00:27:11.064 "block_size": 4096, 00:27:11.064 "num_blocks": 1310720, 00:27:11.064 "uuid": "c208fdfb-6d04-4f29-bbdc-e54a6c4a45d3", 00:27:11.064 "numa_id": -1, 00:27:11.064 "assigned_rate_limits": { 00:27:11.064 "rw_ios_per_sec": 0, 00:27:11.064 "rw_mbytes_per_sec": 0, 00:27:11.064 "r_mbytes_per_sec": 0, 00:27:11.064 "w_mbytes_per_sec": 0 00:27:11.064 }, 00:27:11.064 "claimed": true, 00:27:11.064 "claim_type": "read_many_write_one", 00:27:11.064 "zoned": false, 00:27:11.064 "supported_io_types": { 00:27:11.064 "read": true, 00:27:11.064 "write": true, 00:27:11.064 "unmap": true, 00:27:11.064 "flush": true, 00:27:11.064 "reset": true, 00:27:11.064 "nvme_admin": true, 00:27:11.064 "nvme_io": true, 00:27:11.064 "nvme_io_md": false, 00:27:11.064 "write_zeroes": true, 00:27:11.064 "zcopy": false, 00:27:11.064 "get_zone_info": false, 00:27:11.065 "zone_management": false, 00:27:11.065 "zone_append": false, 00:27:11.065 "compare": true, 00:27:11.065 "compare_and_write": false, 00:27:11.065 "abort": true, 00:27:11.065 "seek_hole": false, 00:27:11.065 "seek_data": false, 00:27:11.065 
"copy": true, 00:27:11.065 "nvme_iov_md": false 00:27:11.065 }, 00:27:11.065 "driver_specific": { 00:27:11.065 "nvme": [ 00:27:11.065 { 00:27:11.065 "pci_address": "0000:00:11.0", 00:27:11.065 "trid": { 00:27:11.065 "trtype": "PCIe", 00:27:11.065 "traddr": "0000:00:11.0" 00:27:11.065 }, 00:27:11.065 "ctrlr_data": { 00:27:11.065 "cntlid": 0, 00:27:11.065 "vendor_id": "0x1b36", 00:27:11.065 "model_number": "QEMU NVMe Ctrl", 00:27:11.065 "serial_number": "12341", 00:27:11.065 "firmware_revision": "8.0.0", 00:27:11.065 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:11.065 "oacs": { 00:27:11.065 "security": 0, 00:27:11.065 "format": 1, 00:27:11.065 "firmware": 0, 00:27:11.065 "ns_manage": 1 00:27:11.065 }, 00:27:11.065 "multi_ctrlr": false, 00:27:11.065 "ana_reporting": false 00:27:11.065 }, 00:27:11.065 "vs": { 00:27:11.065 "nvme_version": "1.4" 00:27:11.065 }, 00:27:11.065 "ns_data": { 00:27:11.065 "id": 1, 00:27:11.065 "can_share": false 00:27:11.065 } 00:27:11.065 } 00:27:11.065 ], 00:27:11.065 "mp_policy": "active_passive" 00:27:11.065 } 00:27:11.065 } 00:27:11.065 ]' 00:27:11.065 23:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:11.065 23:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:11.065 23:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:11.065 23:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:11.065 23:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:11.065 23:28:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:11.065 23:28:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:11.065 23:28:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:11.065 23:28:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:11.065 23:28:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:11.065 23:28:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:11.326 23:28:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=a054a0a2-fe61-4999-b23d-95d4e5a6768e 00:27:11.326 23:28:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:11.326 23:28:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a054a0a2-fe61-4999-b23d-95d4e5a6768e 00:27:11.587 23:28:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:11.587 23:28:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=75b31cc0-1161-464c-ae22-eebd6b805c1f 00:27:11.587 23:28:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 75b31cc0-1161-464c-ae22-eebd6b805c1f 00:27:11.848 23:28:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=66f4fb24-2c3b-47c9-9afd-9991ed79eeb8 00:27:11.849 23:28:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:27:11.849 23:28:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 66f4fb24-2c3b-47c9-9afd-9991ed79eeb8 00:27:11.849 23:28:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:27:11.849 23:28:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:27:11.849 23:28:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=66f4fb24-2c3b-47c9-9afd-9991ed79eeb8 00:27:11.849 23:28:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:27:11.849 23:28:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 66f4fb24-2c3b-47c9-9afd-9991ed79eeb8 00:27:11.849 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=66f4fb24-2c3b-47c9-9afd-9991ed79eeb8 00:27:11.849 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:11.849 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:11.849 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:11.849 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 66f4fb24-2c3b-47c9-9afd-9991ed79eeb8 00:27:12.110 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:12.110 { 00:27:12.110 "name": "66f4fb24-2c3b-47c9-9afd-9991ed79eeb8", 00:27:12.110 "aliases": [ 00:27:12.110 "lvs/nvme0n1p0" 00:27:12.110 ], 00:27:12.110 "product_name": "Logical Volume", 00:27:12.110 "block_size": 4096, 00:27:12.110 "num_blocks": 26476544, 00:27:12.110 "uuid": "66f4fb24-2c3b-47c9-9afd-9991ed79eeb8", 00:27:12.110 "assigned_rate_limits": { 00:27:12.110 "rw_ios_per_sec": 0, 00:27:12.110 "rw_mbytes_per_sec": 0, 00:27:12.110 "r_mbytes_per_sec": 0, 00:27:12.110 "w_mbytes_per_sec": 0 00:27:12.110 }, 00:27:12.110 "claimed": false, 00:27:12.110 "zoned": false, 00:27:12.110 "supported_io_types": { 00:27:12.110 "read": true, 00:27:12.110 "write": true, 00:27:12.110 "unmap": true, 00:27:12.110 "flush": false, 00:27:12.110 "reset": true, 00:27:12.110 "nvme_admin": false, 00:27:12.110 "nvme_io": false, 00:27:12.111 "nvme_io_md": false, 00:27:12.111 "write_zeroes": true, 00:27:12.111 "zcopy": false, 00:27:12.111 "get_zone_info": false, 00:27:12.111 "zone_management": false, 00:27:12.111 "zone_append": false, 00:27:12.111 "compare": false, 00:27:12.111 "compare_and_write": false, 00:27:12.111 "abort": false, 00:27:12.111 "seek_hole": true, 00:27:12.111 "seek_data": true, 00:27:12.111 "copy": false, 00:27:12.111 "nvme_iov_md": false 00:27:12.111 }, 00:27:12.111 "driver_specific": { 00:27:12.111 "lvol": { 00:27:12.111 "lvol_store_uuid": "75b31cc0-1161-464c-ae22-eebd6b805c1f", 00:27:12.111 "base_bdev": "nvme0n1", 00:27:12.111 "thin_provision": true, 00:27:12.111 "num_allocated_clusters": 0, 00:27:12.111 "snapshot": false, 00:27:12.111 "clone": false, 00:27:12.111 "esnap_clone": false 00:27:12.111 } 00:27:12.111 } 00:27:12.111 } 00:27:12.111 ]' 00:27:12.111 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:12.111 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:12.111 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:12.111 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:12.111 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:12.111 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:12.111 23:28:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:27:12.111 23:28:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:12.111 23:28:44 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:12.372 23:28:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:12.372 23:28:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:12.372 23:28:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 66f4fb24-2c3b-47c9-9afd-9991ed79eeb8 00:27:12.372 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=66f4fb24-2c3b-47c9-9afd-9991ed79eeb8 00:27:12.372 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:12.372 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:12.372 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:12.372 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 66f4fb24-2c3b-47c9-9afd-9991ed79eeb8 00:27:12.634 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:12.634 { 00:27:12.634 "name": "66f4fb24-2c3b-47c9-9afd-9991ed79eeb8", 00:27:12.634 "aliases": [ 00:27:12.634 "lvs/nvme0n1p0" 00:27:12.634 ], 00:27:12.634 "product_name": "Logical Volume", 00:27:12.634 "block_size": 4096, 00:27:12.634 "num_blocks": 26476544, 00:27:12.634 "uuid": "66f4fb24-2c3b-47c9-9afd-9991ed79eeb8", 00:27:12.634 "assigned_rate_limits": { 00:27:12.634 "rw_ios_per_sec": 0, 00:27:12.634 "rw_mbytes_per_sec": 0, 00:27:12.634 "r_mbytes_per_sec": 0, 00:27:12.634 "w_mbytes_per_sec": 0 00:27:12.634 }, 00:27:12.634 "claimed": false, 00:27:12.634 "zoned": false, 00:27:12.634 "supported_io_types": { 00:27:12.634 "read": true, 00:27:12.634 "write": true, 00:27:12.634 "unmap": true, 00:27:12.634 "flush": false, 00:27:12.634 "reset": true, 00:27:12.634 "nvme_admin": false, 00:27:12.634 "nvme_io": false, 00:27:12.634 "nvme_io_md": false, 00:27:12.634 "write_zeroes": true, 00:27:12.634 "zcopy": false, 00:27:12.634 "get_zone_info": false, 00:27:12.634 "zone_management": false, 00:27:12.634 "zone_append": false, 00:27:12.634 "compare": false, 00:27:12.634 "compare_and_write": false, 00:27:12.634 "abort": false, 00:27:12.635 "seek_hole": true, 00:27:12.635 "seek_data": true, 00:27:12.635 "copy": false, 00:27:12.635 "nvme_iov_md": false 00:27:12.635 }, 00:27:12.635 "driver_specific": { 00:27:12.635 "lvol": { 00:27:12.635 "lvol_store_uuid": "75b31cc0-1161-464c-ae22-eebd6b805c1f", 00:27:12.635 "base_bdev": "nvme0n1", 00:27:12.635 "thin_provision": true, 00:27:12.635 "num_allocated_clusters": 0, 00:27:12.635 "snapshot": false, 00:27:12.635 "clone": false, 00:27:12.635 "esnap_clone": false 00:27:12.635 } 00:27:12.635 } 00:27:12.635 } 00:27:12.635 ]' 00:27:12.635 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:12.635 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:12.635 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:12.635 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:12.635 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:12.635 23:28:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:12.635 23:28:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:27:12.635 23:28:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:12.896 23:28:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:27:12.896 23:28:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 66f4fb24-2c3b-47c9-9afd-9991ed79eeb8 00:27:12.896 23:28:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=66f4fb24-2c3b-47c9-9afd-9991ed79eeb8 00:27:12.896 23:28:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:12.896 23:28:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:12.896 23:28:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:12.896 23:28:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 66f4fb24-2c3b-47c9-9afd-9991ed79eeb8 00:27:13.159 23:28:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:13.159 { 00:27:13.159 "name": "66f4fb24-2c3b-47c9-9afd-9991ed79eeb8", 00:27:13.159 "aliases": [ 00:27:13.159 "lvs/nvme0n1p0" 00:27:13.159 ], 00:27:13.159 "product_name": "Logical Volume", 00:27:13.159 "block_size": 4096, 00:27:13.159 "num_blocks": 26476544, 00:27:13.159 "uuid": "66f4fb24-2c3b-47c9-9afd-9991ed79eeb8", 00:27:13.159 "assigned_rate_limits": { 00:27:13.159 "rw_ios_per_sec": 0, 00:27:13.159 "rw_mbytes_per_sec": 0, 00:27:13.159 "r_mbytes_per_sec": 0, 00:27:13.159 "w_mbytes_per_sec": 0 00:27:13.159 }, 00:27:13.159 "claimed": false, 00:27:13.159 "zoned": false, 00:27:13.159 "supported_io_types": { 00:27:13.159 "read": true, 00:27:13.159 "write": true, 00:27:13.159 "unmap": true, 00:27:13.159 "flush": false, 00:27:13.159 "reset": true, 00:27:13.159 "nvme_admin": false, 00:27:13.159 "nvme_io": false, 00:27:13.159 "nvme_io_md": false, 00:27:13.159 "write_zeroes": true, 00:27:13.159 "zcopy": false, 00:27:13.159 "get_zone_info": false, 00:27:13.159 "zone_management": false, 00:27:13.159 "zone_append": false, 00:27:13.159 "compare": false, 00:27:13.159 "compare_and_write": false, 00:27:13.159 "abort": false, 00:27:13.159 "seek_hole": true, 00:27:13.159 "seek_data": true, 00:27:13.159 "copy": false, 00:27:13.159 "nvme_iov_md": false 00:27:13.159 }, 00:27:13.159 "driver_specific": { 00:27:13.159 "lvol": { 00:27:13.159 "lvol_store_uuid": "75b31cc0-1161-464c-ae22-eebd6b805c1f", 00:27:13.159 "base_bdev": "nvme0n1", 00:27:13.159 "thin_provision": true, 00:27:13.159 "num_allocated_clusters": 0, 00:27:13.159 "snapshot": false, 00:27:13.159 "clone": false, 00:27:13.159 "esnap_clone": false 00:27:13.159 } 00:27:13.159 } 00:27:13.159 } 00:27:13.159 ]' 00:27:13.159 23:28:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:13.159 23:28:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:13.159 23:28:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:13.159 23:28:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:13.159 23:28:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:13.159 23:28:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:13.159 23:28:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:27:13.159 23:28:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 66f4fb24-2c3b-47c9-9afd-9991ed79eeb8 
--l2p_dram_limit 10' 00:27:13.159 23:28:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:27:13.159 23:28:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:27:13.159 23:28:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:13.159 23:28:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 66f4fb24-2c3b-47c9-9afd-9991ed79eeb8 --l2p_dram_limit 10 -c nvc0n1p0 00:27:13.159 [2024-11-25 23:28:45.506306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.159 [2024-11-25 23:28:45.506347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:13.159 [2024-11-25 23:28:45.506361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:13.159 [2024-11-25 23:28:45.506369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.159 [2024-11-25 23:28:45.506412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.159 [2024-11-25 23:28:45.506420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:13.159 [2024-11-25 23:28:45.506428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:13.159 [2024-11-25 23:28:45.506434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.159 [2024-11-25 23:28:45.506454] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:13.159 [2024-11-25 23:28:45.507018] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:13.159 [2024-11-25 23:28:45.507040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.159 [2024-11-25 23:28:45.507047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:13.159 [2024-11-25 23:28:45.507066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:27:13.159 [2024-11-25 23:28:45.507073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.159 [2024-11-25 23:28:45.507101] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c284eb1b-bd51-42bd-a57e-41d00c4f73df 00:27:13.159 [2024-11-25 23:28:45.508394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.159 [2024-11-25 23:28:45.508423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:13.159 [2024-11-25 23:28:45.508432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:13.159 [2024-11-25 23:28:45.508440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.159 [2024-11-25 23:28:45.515347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.159 [2024-11-25 23:28:45.515379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:13.159 [2024-11-25 23:28:45.515386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.866 ms 00:27:13.159 [2024-11-25 23:28:45.515394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.160 [2024-11-25 23:28:45.515498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.160 [2024-11-25 23:28:45.515508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:13.160 [2024-11-25 23:28:45.515516] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:13.160 [2024-11-25 23:28:45.515526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.160 [2024-11-25 23:28:45.515565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.160 [2024-11-25 23:28:45.515575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:13.160 [2024-11-25 23:28:45.515584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:13.160 [2024-11-25 23:28:45.515592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.160 [2024-11-25 23:28:45.515609] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:13.160 [2024-11-25 23:28:45.518891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.160 [2024-11-25 23:28:45.518926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:13.160 [2024-11-25 23:28:45.518937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.285 ms 00:27:13.160 [2024-11-25 23:28:45.518943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.160 [2024-11-25 23:28:45.518972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.160 [2024-11-25 23:28:45.518978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:13.160 [2024-11-25 23:28:45.518986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:13.160 [2024-11-25 23:28:45.518992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.160 [2024-11-25 23:28:45.519013] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:13.160 [2024-11-25 23:28:45.519136] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:13.160 [2024-11-25 23:28:45.519151] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:13.160 [2024-11-25 23:28:45.519160] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:13.160 [2024-11-25 23:28:45.519172] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:13.160 [2024-11-25 23:28:45.519179] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:13.160 [2024-11-25 23:28:45.519187] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:13.160 [2024-11-25 23:28:45.519193] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:13.160 [2024-11-25 23:28:45.519202] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:13.160 [2024-11-25 23:28:45.519208] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:13.160 [2024-11-25 23:28:45.519216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.160 [2024-11-25 23:28:45.519228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:13.160 [2024-11-25 23:28:45.519236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:27:13.160 [2024-11-25 23:28:45.519242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.160 [2024-11-25 23:28:45.519309] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.160 [2024-11-25 23:28:45.519322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:13.160 [2024-11-25 23:28:45.519329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:13.160 [2024-11-25 23:28:45.519336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.160 [2024-11-25 23:28:45.519418] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:13.160 [2024-11-25 23:28:45.519431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:13.160 [2024-11-25 23:28:45.519440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:13.160 [2024-11-25 23:28:45.519447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.160 [2024-11-25 23:28:45.519455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:13.160 [2024-11-25 23:28:45.519461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:13.160 [2024-11-25 23:28:45.519468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:13.160 [2024-11-25 23:28:45.519473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:13.160 [2024-11-25 23:28:45.519480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:13.160 [2024-11-25 23:28:45.519485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:13.160 [2024-11-25 23:28:45.519492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:13.160 [2024-11-25 23:28:45.519497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:13.160 [2024-11-25 23:28:45.519504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:13.160 [2024-11-25 23:28:45.519509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:13.160 [2024-11-25 23:28:45.519515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:13.160 [2024-11-25 23:28:45.519520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.160 [2024-11-25 23:28:45.519528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:13.160 [2024-11-25 23:28:45.519534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:13.160 [2024-11-25 23:28:45.519544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.160 [2024-11-25 23:28:45.519551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:13.160 [2024-11-25 23:28:45.519558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:13.160 [2024-11-25 23:28:45.519563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:13.160 [2024-11-25 23:28:45.519570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:13.160 [2024-11-25 23:28:45.519576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:13.160 [2024-11-25 23:28:45.519582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:13.160 [2024-11-25 23:28:45.519587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:13.160 [2024-11-25 23:28:45.519594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:13.160 [2024-11-25 23:28:45.519599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:13.160 [2024-11-25 23:28:45.519605] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:13.160 [2024-11-25 23:28:45.519611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:13.160 [2024-11-25 23:28:45.519617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:13.160 [2024-11-25 23:28:45.519622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:13.160 [2024-11-25 23:28:45.519630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:13.160 [2024-11-25 23:28:45.519636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:13.160 [2024-11-25 23:28:45.519642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:13.160 [2024-11-25 23:28:45.519648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:13.160 [2024-11-25 23:28:45.519655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:13.160 [2024-11-25 23:28:45.519661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:13.160 [2024-11-25 23:28:45.519668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:13.160 [2024-11-25 23:28:45.519679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.160 [2024-11-25 23:28:45.519686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:13.160 [2024-11-25 23:28:45.519692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:13.160 [2024-11-25 23:28:45.519698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.160 [2024-11-25 23:28:45.519703] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:13.160 [2024-11-25 23:28:45.519711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:13.161 [2024-11-25 23:28:45.519717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:13.161 [2024-11-25 23:28:45.519726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:13.161 [2024-11-25 23:28:45.519732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:13.161 [2024-11-25 23:28:45.519740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:13.161 [2024-11-25 23:28:45.519745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:13.161 [2024-11-25 23:28:45.519753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:13.161 [2024-11-25 23:28:45.519758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:13.161 [2024-11-25 23:28:45.519765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:13.161 [2024-11-25 23:28:45.519774] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:13.161 [2024-11-25 23:28:45.519784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:13.161 [2024-11-25 23:28:45.519792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:13.161 [2024-11-25 23:28:45.519799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:13.161 [2024-11-25 23:28:45.519805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:13.161 [2024-11-25 23:28:45.519812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:13.161 [2024-11-25 23:28:45.519818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:13.161 [2024-11-25 23:28:45.519825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:13.161 [2024-11-25 23:28:45.519830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:13.161 [2024-11-25 23:28:45.519837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:13.161 [2024-11-25 23:28:45.519842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:13.161 [2024-11-25 23:28:45.519851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:13.161 [2024-11-25 23:28:45.519857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:13.161 [2024-11-25 23:28:45.519863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:13.161 [2024-11-25 23:28:45.519870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:13.161 [2024-11-25 23:28:45.519878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:13.161 [2024-11-25 23:28:45.519884] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:13.161 [2024-11-25 23:28:45.519892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:13.161 [2024-11-25 23:28:45.519898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:13.161 [2024-11-25 23:28:45.519906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:13.161 [2024-11-25 23:28:45.519912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:13.161 [2024-11-25 23:28:45.519920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:13.161 [2024-11-25 23:28:45.519926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:13.161 [2024-11-25 23:28:45.519933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:13.161 [2024-11-25 23:28:45.519940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:27:13.161 [2024-11-25 23:28:45.519947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.161 [2024-11-25 23:28:45.519988] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:13.161 [2024-11-25 23:28:45.520001] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:17.366 [2024-11-25 23:28:49.364269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.366 [2024-11-25 23:28:49.364359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:17.366 [2024-11-25 23:28:49.364379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3844.264 ms 00:27:17.366 [2024-11-25 23:28:49.364393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.366 [2024-11-25 23:28:49.402120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.366 [2024-11-25 23:28:49.402189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:17.366 [2024-11-25 23:28:49.402205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.463 ms 00:27:17.366 [2024-11-25 23:28:49.402217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.366 [2024-11-25 23:28:49.402364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.366 [2024-11-25 23:28:49.402380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:17.366 [2024-11-25 23:28:49.402392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:27:17.366 [2024-11-25 23:28:49.402410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.366 [2024-11-25 23:28:49.439159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.366 [2024-11-25 23:28:49.439211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:17.366 [2024-11-25 23:28:49.439223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.694 ms 00:27:17.366 [2024-11-25 23:28:49.439233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.366 [2024-11-25 23:28:49.439269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.366 [2024-11-25 23:28:49.439279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:17.366 [2024-11-25 23:28:49.439287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:17.366 [2024-11-25 23:28:49.439304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.366 [2024-11-25 23:28:49.439958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.366 [2024-11-25 23:28:49.440015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:17.366 [2024-11-25 23:28:49.440026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:27:17.366 [2024-11-25 23:28:49.440036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.366 [2024-11-25 23:28:49.440153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.366 [2024-11-25 23:28:49.440164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:17.366 [2024-11-25 23:28:49.440177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:27:17.366 [2024-11-25 23:28:49.440189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.366 [2024-11-25 23:28:49.457254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.366 [2024-11-25 23:28:49.457296] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:17.366 [2024-11-25 23:28:49.457305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.047 ms 00:27:17.366 [2024-11-25 23:28:49.457314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.366 [2024-11-25 23:28:49.469406] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:17.366 [2024-11-25 23:28:49.473567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.366 [2024-11-25 23:28:49.473601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:17.366 [2024-11-25 23:28:49.473613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.179 ms 00:27:17.366 [2024-11-25 23:28:49.473620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.366 [2024-11-25 23:28:49.575814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.366 [2024-11-25 23:28:49.575856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:17.366 [2024-11-25 23:28:49.575869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.164 ms 00:27:17.366 [2024-11-25 23:28:49.575877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.366 [2024-11-25 23:28:49.576032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.366 [2024-11-25 23:28:49.576043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:17.366 [2024-11-25 23:28:49.576065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:27:17.366 [2024-11-25 23:28:49.576072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.366 [2024-11-25 23:28:49.594727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.367 [2024-11-25 23:28:49.594755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:17.367 [2024-11-25 23:28:49.594766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.627 ms 00:27:17.367 [2024-11-25 23:28:49.594773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.367 [2024-11-25 23:28:49.612420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.367 [2024-11-25 23:28:49.612445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:17.367 [2024-11-25 23:28:49.612456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.613 ms 00:27:17.367 [2024-11-25 23:28:49.612462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.367 [2024-11-25 23:28:49.612930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.367 [2024-11-25 23:28:49.612988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:17.367 [2024-11-25 23:28:49.612998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:27:17.367 [2024-11-25 23:28:49.613006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.367 [2024-11-25 23:28:49.678316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.367 [2024-11-25 23:28:49.678348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:17.367 [2024-11-25 23:28:49.678360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.275 ms 00:27:17.367 [2024-11-25 23:28:49.678367] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.367 [2024-11-25 23:28:49.698355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.367 [2024-11-25 23:28:49.698381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:17.367 [2024-11-25 23:28:49.698391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.930 ms 00:27:17.367 [2024-11-25 23:28:49.698398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.367 [2024-11-25 23:28:49.716290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.367 [2024-11-25 23:28:49.716315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:17.367 [2024-11-25 23:28:49.716325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.863 ms 00:27:17.367 [2024-11-25 23:28:49.716331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.627 [2024-11-25 23:28:49.735194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.627 [2024-11-25 23:28:49.735222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:17.627 [2024-11-25 23:28:49.735233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.834 ms 00:27:17.627 [2024-11-25 23:28:49.735240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.627 [2024-11-25 23:28:49.735273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.627 [2024-11-25 23:28:49.735281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:17.627 [2024-11-25 23:28:49.735292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:17.627 [2024-11-25 23:28:49.735298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.627 [2024-11-25 23:28:49.735364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.627 [2024-11-25 23:28:49.735374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:17.627 [2024-11-25 23:28:49.735383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:17.627 [2024-11-25 23:28:49.735391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.627 [2024-11-25 23:28:49.737522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4230.032 ms, result 0 00:27:17.627 { 00:27:17.627 "name": "ftl0", 00:27:17.627 "uuid": "c284eb1b-bd51-42bd-a57e-41d00c4f73df" 00:27:17.627 } 00:27:17.627 23:28:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:27:17.627 23:28:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:17.627 23:28:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:27:17.627 23:28:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:27:17.627 23:28:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:27:17.889 /dev/nbd0 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:27:17.889 1+0 records in 00:27:17.889 1+0 records out 00:27:17.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475787 s, 8.6 MB/s 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:27:17.889 23:28:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:27:17.889 [2024-11-25 23:28:50.240161] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:27:17.889 [2024-11-25 23:28:50.240284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81413 ] 00:27:18.150 [2024-11-25 23:28:50.402447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.412 [2024-11-25 23:28:50.525004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.798  [2024-11-25T23:28:53.112Z] Copying: 194/1024 [MB] (194 MBps) [2024-11-25T23:28:54.054Z] Copying: 391/1024 [MB] (196 MBps) [2024-11-25T23:28:54.990Z] Copying: 606/1024 [MB] (215 MBps) [2024-11-25T23:28:55.558Z] Copying: 860/1024 [MB] (253 MBps) [2024-11-25T23:28:56.129Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:27:23.760 00:27:23.760 23:28:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:26.306 23:28:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:27:26.306 [2024-11-25 23:28:58.175169] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
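The run above is the pre-shutdown write phase: dirty_shutdown.sh (steps 70-77 in the xtrace) exposes ftl0 as /dev/nbd0, polls /proc/partitions until the device node appears, smoke-tests it with a single 4 KiB direct-I/O dd, generates a 1 GiB random test file, records its md5 as the reference checksum, and streams the file onto the device. A condensed sketch of that sequence, assuming a running spdk_tgt that already exposes bdev ftl0; the rpc.py and spdk_dd paths are the ones from this run, while the /tmp file paths are placeholders:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

modprobe nbd
"$RPC" nbd_start_disk ftl0 /dev/nbd0
for ((i = 1; i <= 20; i++)); do       # same idea as the waitfornbd helper above
  grep -q -w nbd0 /proc/partitions && break
  sleep 0.1
done
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct    # smoke test read
"$SPDK_DD" -m 0x2 --if=/dev/urandom --of=/tmp/testfile --bs=4096 --count=262144   # 262144 x 4 KiB = 1 GiB
md5sum /tmp/testfile                  # reference checksum for later verification
"$SPDK_DD" -m 0x2 --if=/tmp/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct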
00:27:26.306 [2024-11-25 23:28:58.175718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81497 ] 00:27:26.306 [2024-11-25 23:28:58.329752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.306 [2024-11-25 23:28:58.404082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.240  [2024-11-25T23:29:00.983Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-25T23:29:01.922Z] Copying: 63/1024 [MB] (33 MBps) [2024-11-25T23:29:02.854Z] Copying: 93/1024 [MB] (30 MBps) [2024-11-25T23:29:03.786Z] Copying: 129/1024 [MB] (35 MBps) [2024-11-25T23:29:04.717Z] Copying: 164/1024 [MB] (34 MBps) [2024-11-25T23:29:05.649Z] Copying: 199/1024 [MB] (35 MBps) [2024-11-25T23:29:06.582Z] Copying: 234/1024 [MB] (34 MBps) [2024-11-25T23:29:07.957Z] Copying: 269/1024 [MB] (35 MBps) [2024-11-25T23:29:08.890Z] Copying: 304/1024 [MB] (34 MBps) [2024-11-25T23:29:09.825Z] Copying: 336/1024 [MB] (32 MBps) [2024-11-25T23:29:10.774Z] Copying: 367/1024 [MB] (30 MBps) [2024-11-25T23:29:11.746Z] Copying: 402/1024 [MB] (35 MBps) [2024-11-25T23:29:12.682Z] Copying: 437/1024 [MB] (34 MBps) [2024-11-25T23:29:13.616Z] Copying: 471/1024 [MB] (34 MBps) [2024-11-25T23:29:14.993Z] Copying: 507/1024 [MB] (35 MBps) [2024-11-25T23:29:15.926Z] Copying: 541/1024 [MB] (34 MBps) [2024-11-25T23:29:16.861Z] Copying: 576/1024 [MB] (34 MBps) [2024-11-25T23:29:17.794Z] Copying: 608/1024 [MB] (31 MBps) [2024-11-25T23:29:18.728Z] Copying: 644/1024 [MB] (36 MBps) [2024-11-25T23:29:19.661Z] Copying: 679/1024 [MB] (34 MBps) [2024-11-25T23:29:20.596Z] Copying: 712/1024 [MB] (32 MBps) [2024-11-25T23:29:21.969Z] Copying: 741/1024 [MB] (29 MBps) [2024-11-25T23:29:22.901Z] Copying: 772/1024 [MB] (31 MBps) [2024-11-25T23:29:23.833Z] Copying: 804/1024 [MB] (31 MBps) [2024-11-25T23:29:24.767Z] Copying: 835/1024 [MB] (30 MBps) [2024-11-25T23:29:25.700Z] Copying: 866/1024 [MB] (30 MBps) [2024-11-25T23:29:26.634Z] Copying: 900/1024 [MB] (34 MBps) [2024-11-25T23:29:27.577Z] Copying: 936/1024 [MB] (35 MBps) [2024-11-25T23:29:28.967Z] Copying: 953/1024 [MB] (17 MBps) [2024-11-25T23:29:29.907Z] Copying: 986920/1048576 [kB] (10184 kBps) [2024-11-25T23:29:30.854Z] Copying: 974/1024 [MB] (10 MBps) [2024-11-25T23:29:31.789Z] Copying: 991/1024 [MB] (16 MBps) [2024-11-25T23:29:32.048Z] Copying: 1015/1024 [MB] (24 MBps) [2024-11-25T23:29:32.616Z] Copying: 1024/1024 [MB] (average 30 MBps) 00:28:00.247 00:28:00.247 23:29:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:00.247 23:29:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:00.506 23:29:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:00.506 [2024-11-25 23:29:32.801725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.506 [2024-11-25 23:29:32.801861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:00.506 [2024-11-25 23:29:32.801879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:00.506 [2024-11-25 23:29:32.801887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.506 [2024-11-25 23:29:32.801915] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] 
FTL IO channel destroy on app_thread 00:28:00.506 [2024-11-25 23:29:32.804015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.506 [2024-11-25 23:29:32.804044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:00.506 [2024-11-25 23:29:32.804064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:28:00.506 [2024-11-25 23:29:32.804073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.506 [2024-11-25 23:29:32.805771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.506 [2024-11-25 23:29:32.805800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:00.506 [2024-11-25 23:29:32.805809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:28:00.506 [2024-11-25 23:29:32.805815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.506 [2024-11-25 23:29:32.819513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.506 [2024-11-25 23:29:32.819541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:00.506 [2024-11-25 23:29:32.819551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.681 ms 00:28:00.506 [2024-11-25 23:29:32.819557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.506 [2024-11-25 23:29:32.824410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.506 [2024-11-25 23:29:32.824518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:00.507 [2024-11-25 23:29:32.824535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.822 ms 00:28:00.507 [2024-11-25 23:29:32.824541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.507 [2024-11-25 23:29:32.843136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.507 [2024-11-25 23:29:32.843240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:00.507 [2024-11-25 23:29:32.843257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.510 ms 00:28:00.507 [2024-11-25 23:29:32.843263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.507 [2024-11-25 23:29:32.855575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.507 [2024-11-25 23:29:32.855604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:00.507 [2024-11-25 23:29:32.855618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.280 ms 00:28:00.507 [2024-11-25 23:29:32.855624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.507 [2024-11-25 23:29:32.855732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.507 [2024-11-25 23:29:32.855740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:00.507 [2024-11-25 23:29:32.855748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:28:00.507 [2024-11-25 23:29:32.855754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.769 [2024-11-25 23:29:32.873582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.769 [2024-11-25 23:29:32.873608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:00.769 [2024-11-25 23:29:32.873618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.813 ms 00:28:00.769 
[2024-11-25 23:29:32.873623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.769 [2024-11-25 23:29:32.891162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.769 [2024-11-25 23:29:32.891187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:00.769 [2024-11-25 23:29:32.891197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.507 ms 00:28:00.769 [2024-11-25 23:29:32.891202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.769 [2024-11-25 23:29:32.908189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.769 [2024-11-25 23:29:32.908214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:00.769 [2024-11-25 23:29:32.908223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.948 ms 00:28:00.769 [2024-11-25 23:29:32.908229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.769 [2024-11-25 23:29:32.925129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.769 [2024-11-25 23:29:32.925226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:00.769 [2024-11-25 23:29:32.925242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.841 ms 00:28:00.769 [2024-11-25 23:29:32.925248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.769 [2024-11-25 23:29:32.925274] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:00.769 [2024-11-25 23:29:32.925285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:00.769 [2024-11-25 23:29:32.925295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:00.769 [2024-11-25 23:29:32.925301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: 
free 00:28:00.770 [2024-11-25 23:29:32.925380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 
261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:00.770 [2024-11-25 23:29:32.925833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925865] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:00.771 [2024-11-25 23:29:32.925953] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:00.771 [2024-11-25 23:29:32.925960] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c284eb1b-bd51-42bd-a57e-41d00c4f73df 00:28:00.771 [2024-11-25 23:29:32.925966] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:00.771 [2024-11-25 23:29:32.925974] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:00.771 [2024-11-25 23:29:32.925979] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:00.771 [2024-11-25 23:29:32.925988] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:00.771 [2024-11-25 23:29:32.925994] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:00.771 [2024-11-25 23:29:32.926001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:00.771 [2024-11-25 23:29:32.926006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:00.771 [2024-11-25 23:29:32.926012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:00.771 [2024-11-25 23:29:32.926017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:00.771 [2024-11-25 23:29:32.926024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.771 [2024-11-25 23:29:32.926030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:00.771 [2024-11-25 23:29:32.926037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:28:00.771 [2024-11-25 23:29:32.926043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.771 [2024-11-25 23:29:32.935702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.771 [2024-11-25 23:29:32.935729] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:00.771 [2024-11-25 23:29:32.935738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.576 ms 00:28:00.771 [2024-11-25 23:29:32.935745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.771 [2024-11-25 23:29:32.936016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.771 [2024-11-25 23:29:32.936027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:00.771 [2024-11-25 23:29:32.936035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:28:00.771 [2024-11-25 23:29:32.936040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.771 [2024-11-25 23:29:32.968792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.771 [2024-11-25 23:29:32.968902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:00.771 [2024-11-25 23:29:32.968917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.771 [2024-11-25 23:29:32.968924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.771 [2024-11-25 23:29:32.968972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.771 [2024-11-25 23:29:32.968978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:00.771 [2024-11-25 23:29:32.968986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.771 [2024-11-25 23:29:32.968991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.771 [2024-11-25 23:29:32.969044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.771 [2024-11-25 23:29:32.969053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:00.771 [2024-11-25 23:29:32.969080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.771 [2024-11-25 23:29:32.969089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.771 [2024-11-25 23:29:32.969111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.771 [2024-11-25 23:29:32.969118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:00.771 [2024-11-25 23:29:32.969126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.771 [2024-11-25 23:29:32.969131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.771 [2024-11-25 23:29:33.027762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.771 [2024-11-25 23:29:33.027796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:00.771 [2024-11-25 23:29:33.027806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.771 [2024-11-25 23:29:33.027812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.771 [2024-11-25 23:29:33.076306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.771 [2024-11-25 23:29:33.076339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:00.771 [2024-11-25 23:29:33.076350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.771 [2024-11-25 23:29:33.076356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.771 [2024-11-25 23:29:33.076422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:28:00.771 [2024-11-25 23:29:33.076429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:00.771 [2024-11-25 23:29:33.076439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.771 [2024-11-25 23:29:33.076445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.771 [2024-11-25 23:29:33.076482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.771 [2024-11-25 23:29:33.076489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:00.771 [2024-11-25 23:29:33.076497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.771 [2024-11-25 23:29:33.076503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.771 [2024-11-25 23:29:33.076573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.771 [2024-11-25 23:29:33.076580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:00.771 [2024-11-25 23:29:33.076588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.771 [2024-11-25 23:29:33.076595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.771 [2024-11-25 23:29:33.076624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.771 [2024-11-25 23:29:33.076632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:00.771 [2024-11-25 23:29:33.076639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.771 [2024-11-25 23:29:33.076644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.771 [2024-11-25 23:29:33.076674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.771 [2024-11-25 23:29:33.076680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:00.771 [2024-11-25 23:29:33.076687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.771 [2024-11-25 23:29:33.076695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.771 [2024-11-25 23:29:33.076732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.771 [2024-11-25 23:29:33.076740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:00.771 [2024-11-25 23:29:33.076747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.771 [2024-11-25 23:29:33.076752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.771 [2024-11-25 23:29:33.076863] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.103 ms, result 0 00:28:00.771 true 00:28:00.771 23:29:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81271 00:28:00.771 23:29:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81271 00:28:00.771 23:29:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:01.031 [2024-11-25 23:29:33.160713] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
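This is the dirty shutdown the test is named for. The bdev_ftl_unload above finished cleanly ('FTL shutdown', result 0), but step 83 then hard-kills the spdk_tgt that owns the backing devices, so nothing underneath is torn down; step 87 generates a second 1 GiB random file. The spdk_dd invocation that follows (step 88) re-creates ftl0 from the ftl.json saved earlier and writes that file at a 262144-block offset, directly behind the first 1 GiB, and its startup below duly runs recovery first (the repeated 'unable to find bdev' notices and the blobstore recovery). A condensed sketch, assuming $pid holds the target's PID (81271 in this run) and placeholder /tmp paths:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

kill -9 "$pid"                              # step 83: kill the target, no teardown
rm -f "/dev/shm/spdk_tgt_trace.pid$pid"     # step 84: drop the stale trace file
"$SPDK_DD" --if=/dev/urandom --of=/tmp/testfile2 --bs=4096 --count=262144   # step 87
# step 88: a standalone spdk_dd loads ftl0 from the saved subsystem config and
# writes 262144 blocks at a 262144-block seek offset, i.e. the second 1 GiB
"$SPDK_DD" --if=/tmp/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
  --json=/tmp/ftl.json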
00:28:01.031 [2024-11-25 23:29:33.160841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81865 ] 00:28:01.031 [2024-11-25 23:29:33.318485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.290 [2024-11-25 23:29:33.406472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.226  [2024-11-25T23:29:35.971Z] Copying: 254/1024 [MB] (254 MBps) [2024-11-25T23:29:36.908Z] Copying: 509/1024 [MB] (255 MBps) [2024-11-25T23:29:37.843Z] Copying: 760/1024 [MB] (251 MBps) [2024-11-25T23:29:37.843Z] Copying: 1013/1024 [MB] (252 MBps) [2024-11-25T23:29:38.410Z] Copying: 1024/1024 [MB] (average 253 MBps) 00:28:06.041 00:28:06.041 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81271 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:28:06.041 23:29:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:06.041 [2024-11-25 23:29:38.246926] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:28:06.041 [2024-11-25 23:29:38.247186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81922 ] 00:28:06.041 [2024-11-25 23:29:38.402172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.300 [2024-11-25 23:29:38.491652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.579 [2024-11-25 23:29:38.699320] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:06.580 [2024-11-25 23:29:38.699373] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:06.580 [2024-11-25 23:29:38.762077] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:06.580 [2024-11-25 23:29:38.762342] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:06.580 [2024-11-25 23:29:38.762539] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:06.860 [2024-11-25 23:29:38.936588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.860 [2024-11-25 23:29:38.936712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:06.860 [2024-11-25 23:29:38.936728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:06.860 [2024-11-25 23:29:38.936740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.860 [2024-11-25 23:29:38.936780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.860 [2024-11-25 23:29:38.936789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:06.860 [2024-11-25 23:29:38.936795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:06.860 [2024-11-25 23:29:38.936801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.860 [2024-11-25 23:29:38.936816] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:06.860 
[2024-11-25 23:29:38.937386] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:06.860 [2024-11-25 23:29:38.937401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.860 [2024-11-25 23:29:38.937407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:06.860 [2024-11-25 23:29:38.937413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:28:06.860 [2024-11-25 23:29:38.937419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.861 [2024-11-25 23:29:38.938389] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:06.861 [2024-11-25 23:29:38.947976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.861 [2024-11-25 23:29:38.948100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:06.861 [2024-11-25 23:29:38.948119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.588 ms 00:28:06.861 [2024-11-25 23:29:38.948126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.861 [2024-11-25 23:29:38.948166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.861 [2024-11-25 23:29:38.948173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:06.861 [2024-11-25 23:29:38.948179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:06.861 [2024-11-25 23:29:38.948185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.861 [2024-11-25 23:29:38.952816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.861 [2024-11-25 23:29:38.952853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:06.861 [2024-11-25 23:29:38.952861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.592 ms 00:28:06.861 [2024-11-25 23:29:38.952867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.861 [2024-11-25 23:29:38.952920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.861 [2024-11-25 23:29:38.952927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:06.861 [2024-11-25 23:29:38.952933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:06.861 [2024-11-25 23:29:38.952939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.861 [2024-11-25 23:29:38.952979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.861 [2024-11-25 23:29:38.952987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:06.861 [2024-11-25 23:29:38.952993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:06.861 [2024-11-25 23:29:38.952998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.861 [2024-11-25 23:29:38.953011] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:06.861 [2024-11-25 23:29:38.955848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.861 [2024-11-25 23:29:38.955950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:06.861 [2024-11-25 23:29:38.955962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.840 ms 00:28:06.861 [2024-11-25 23:29:38.955967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:06.861 [2024-11-25 23:29:38.955994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.861 [2024-11-25 23:29:38.956000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:06.861 [2024-11-25 23:29:38.956006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:06.861 [2024-11-25 23:29:38.956012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.861 [2024-11-25 23:29:38.956028] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:06.861 [2024-11-25 23:29:38.956042] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:06.861 [2024-11-25 23:29:38.956084] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:06.861 [2024-11-25 23:29:38.956102] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:06.861 [2024-11-25 23:29:38.956185] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:06.861 [2024-11-25 23:29:38.956193] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:06.861 [2024-11-25 23:29:38.956201] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:06.861 [2024-11-25 23:29:38.956211] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:06.861 [2024-11-25 23:29:38.956218] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:06.861 [2024-11-25 23:29:38.956224] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:06.861 [2024-11-25 23:29:38.956229] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:06.861 [2024-11-25 23:29:38.956235] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:06.861 [2024-11-25 23:29:38.956240] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:06.861 [2024-11-25 23:29:38.956246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.861 [2024-11-25 23:29:38.956251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:06.861 [2024-11-25 23:29:38.956257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:28:06.861 [2024-11-25 23:29:38.956262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.861 [2024-11-25 23:29:38.956325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.861 [2024-11-25 23:29:38.956333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:06.861 [2024-11-25 23:29:38.956339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:06.861 [2024-11-25 23:29:38.956344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.861 [2024-11-25 23:29:38.956419] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:06.861 [2024-11-25 23:29:38.956427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:06.861 [2024-11-25 23:29:38.956433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:06.861 [2024-11-25 23:29:38.956440] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.861 [2024-11-25 23:29:38.956445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:06.861 [2024-11-25 23:29:38.956450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:06.861 [2024-11-25 23:29:38.956455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:06.861 [2024-11-25 23:29:38.956460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:06.861 [2024-11-25 23:29:38.956466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:06.861 [2024-11-25 23:29:38.956476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:06.861 [2024-11-25 23:29:38.956481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:06.861 [2024-11-25 23:29:38.956486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:06.861 [2024-11-25 23:29:38.956491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:06.861 [2024-11-25 23:29:38.956496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:06.861 [2024-11-25 23:29:38.956503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:06.861 [2024-11-25 23:29:38.956509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.861 [2024-11-25 23:29:38.956514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:06.861 [2024-11-25 23:29:38.956518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:06.861 [2024-11-25 23:29:38.956523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.861 [2024-11-25 23:29:38.956528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:06.861 [2024-11-25 23:29:38.956534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:06.861 [2024-11-25 23:29:38.956539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:06.861 [2024-11-25 23:29:38.956544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:06.861 [2024-11-25 23:29:38.956549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:06.861 [2024-11-25 23:29:38.956554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:06.861 [2024-11-25 23:29:38.956559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:06.861 [2024-11-25 23:29:38.956564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:06.861 [2024-11-25 23:29:38.956569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:06.861 [2024-11-25 23:29:38.956573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:06.861 [2024-11-25 23:29:38.956578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:06.861 [2024-11-25 23:29:38.956583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:06.861 [2024-11-25 23:29:38.956588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:06.862 [2024-11-25 23:29:38.956593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:06.862 [2024-11-25 23:29:38.956597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:06.862 [2024-11-25 23:29:38.956602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:06.862 
[2024-11-25 23:29:38.956607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:06.862 [2024-11-25 23:29:38.956612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:06.862 [2024-11-25 23:29:38.956617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:06.862 [2024-11-25 23:29:38.956622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:06.862 [2024-11-25 23:29:38.956627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.862 [2024-11-25 23:29:38.956632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:06.862 [2024-11-25 23:29:38.956637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:06.862 [2024-11-25 23:29:38.956641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.862 [2024-11-25 23:29:38.956646] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:06.862 [2024-11-25 23:29:38.956653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:06.862 [2024-11-25 23:29:38.956660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:06.862 [2024-11-25 23:29:38.956666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.862 [2024-11-25 23:29:38.956672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:06.862 [2024-11-25 23:29:38.956677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:06.862 [2024-11-25 23:29:38.956682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:06.862 [2024-11-25 23:29:38.956687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:06.862 [2024-11-25 23:29:38.956692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:06.862 [2024-11-25 23:29:38.956697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:06.862 [2024-11-25 23:29:38.956703] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:06.862 [2024-11-25 23:29:38.956710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:06.862 [2024-11-25 23:29:38.956716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:06.862 [2024-11-25 23:29:38.956722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:06.862 [2024-11-25 23:29:38.956727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:06.862 [2024-11-25 23:29:38.956732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:06.862 [2024-11-25 23:29:38.956738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:06.862 [2024-11-25 23:29:38.956743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:06.862 [2024-11-25 23:29:38.956749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:28:06.862 [2024-11-25 23:29:38.956754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:06.862 [2024-11-25 23:29:38.956759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:06.862 [2024-11-25 23:29:38.956764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:06.862 [2024-11-25 23:29:38.956769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:06.862 [2024-11-25 23:29:38.956774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:06.862 [2024-11-25 23:29:38.956779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:06.862 [2024-11-25 23:29:38.956785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:06.862 [2024-11-25 23:29:38.956790] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:06.862 [2024-11-25 23:29:38.956796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:06.862 [2024-11-25 23:29:38.956802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:06.862 [2024-11-25 23:29:38.956807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:06.862 [2024-11-25 23:29:38.956813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:06.862 [2024-11-25 23:29:38.956818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:06.862 [2024-11-25 23:29:38.956823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.862 [2024-11-25 23:29:38.956830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:06.862 [2024-11-25 23:29:38.956854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:28:06.862 [2024-11-25 23:29:38.956860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.862 [2024-11-25 23:29:38.978304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.862 [2024-11-25 23:29:38.978400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:06.862 [2024-11-25 23:29:38.978438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.400 ms 00:28:06.862 [2024-11-25 23:29:38.978456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.862 [2024-11-25 23:29:38.978534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.862 [2024-11-25 23:29:38.978550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:06.862 [2024-11-25 23:29:38.978564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:06.862 [2024-11-25 
23:29:38.978578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.862 [2024-11-25 23:29:39.015386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.862 [2024-11-25 23:29:39.015496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:06.862 [2024-11-25 23:29:39.015546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.759 ms 00:28:06.862 [2024-11-25 23:29:39.015564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.862 [2024-11-25 23:29:39.015606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.862 [2024-11-25 23:29:39.015624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:06.862 [2024-11-25 23:29:39.015640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:06.862 [2024-11-25 23:29:39.015654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.862 [2024-11-25 23:29:39.015981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.862 [2024-11-25 23:29:39.016069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:06.862 [2024-11-25 23:29:39.016113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:28:06.862 [2024-11-25 23:29:39.016134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.862 [2024-11-25 23:29:39.016321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.862 [2024-11-25 23:29:39.016369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:06.862 [2024-11-25 23:29:39.016405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:28:06.862 [2024-11-25 23:29:39.016422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.862 [2024-11-25 23:29:39.026951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.862 [2024-11-25 23:29:39.027041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:06.862 [2024-11-25 23:29:39.027104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.504 ms 00:28:06.862 [2024-11-25 23:29:39.027124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.863 [2024-11-25 23:29:39.036889] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:06.863 [2024-11-25 23:29:39.036992] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:06.863 [2024-11-25 23:29:39.037041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.863 [2024-11-25 23:29:39.037097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:06.863 [2024-11-25 23:29:39.037118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.820 ms 00:28:06.863 [2024-11-25 23:29:39.037152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.863 [2024-11-25 23:29:39.055935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.863 [2024-11-25 23:29:39.056029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:06.863 [2024-11-25 23:29:39.056086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.716 ms 00:28:06.863 [2024-11-25 23:29:39.056109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:06.863 [2024-11-25 23:29:39.065217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.863 [2024-11-25 23:29:39.065305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:06.863 [2024-11-25 23:29:39.065392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.079 ms 00:28:06.863 [2024-11-25 23:29:39.065408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.863 [2024-11-25 23:29:39.074052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.863 [2024-11-25 23:29:39.074150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:06.863 [2024-11-25 23:29:39.074238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.613 ms 00:28:06.863 [2024-11-25 23:29:39.074262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.863 [2024-11-25 23:29:39.074756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.863 [2024-11-25 23:29:39.074820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:06.863 [2024-11-25 23:29:39.074856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:28:06.863 [2024-11-25 23:29:39.074873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.863 [2024-11-25 23:29:39.118878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.863 [2024-11-25 23:29:39.119022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:06.863 [2024-11-25 23:29:39.119085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.980 ms 00:28:06.863 [2024-11-25 23:29:39.119114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.863 [2024-11-25 23:29:39.126936] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:06.863 [2024-11-25 23:29:39.128747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.863 [2024-11-25 23:29:39.128842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:06.863 [2024-11-25 23:29:39.128894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.564 ms 00:28:06.863 [2024-11-25 23:29:39.128918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.863 [2024-11-25 23:29:39.128986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.863 [2024-11-25 23:29:39.129043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:06.863 [2024-11-25 23:29:39.129081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:06.863 [2024-11-25 23:29:39.129102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.863 [2024-11-25 23:29:39.129186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.863 [2024-11-25 23:29:39.129206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:06.863 [2024-11-25 23:29:39.129222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:06.863 [2024-11-25 23:29:39.129266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.863 [2024-11-25 23:29:39.129299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.863 [2024-11-25 23:29:39.129316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:06.863 
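Each management step in this startup is traced with the same four lines: Action (or Rollback), then name, duration, status. A throwaway reducer along these lines (again only a sketch over the raw log text; no such tool exists in the suite) collapses the trace to one row per step, e.g. "Action  Initialize NV cache  36.759 ms  ok":

import re

STEP = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] (Action|Rollback)"
                  r".*?name: (.+?) \d{2}:\d{2}:\d{2}"   # name ends at the next HH:MM:SS stamp
                  r".*?duration: ([\d.]+) ms"
                  r".*?status: (\d+)", re.S)

def summarize(log_text):
    # one line per management step, flagging any non-zero status
    for kind, name, duration, status in STEP.findall(log_text):
        flag = "ok" if status == "0" else f"FAILED (status {status})"
        print(f"{kind:<8} {name:<36} {float(duration):>9.3f} ms  {flag}")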
[2024-11-25 23:29:39.129331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:06.863 [2024-11-25 23:29:39.129367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.863 [2024-11-25 23:29:39.129412] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:06.863 [2024-11-25 23:29:39.129431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.863 [2024-11-25 23:29:39.129532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:06.863 [2024-11-25 23:29:39.129549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:06.863 [2024-11-25 23:29:39.129566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.863 [2024-11-25 23:29:39.147343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.863 [2024-11-25 23:29:39.147434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:06.863 [2024-11-25 23:29:39.147474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.751 ms 00:28:06.863 [2024-11-25 23:29:39.147491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.863 [2024-11-25 23:29:39.147553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.863 [2024-11-25 23:29:39.147678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:06.863 [2024-11-25 23:29:39.147704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:06.863 [2024-11-25 23:29:39.147719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.863 [2024-11-25 23:29:39.148544] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 211.630 ms, result 0 00:28:07.813  [2024-11-25T23:29:41.571Z] Copying: 23/1024 [MB] (23 MBps)
[... intermediate Copying progress updates trimmed ...]
[2024-11-25T23:30:52.977Z] Copying: 1047836/1048576 [kB] (9348 kBps) [2024-11-25T23:30:52.977Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-25 23:30:52.882284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.608 [2024-11-25 23:30:52.882339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:20.608 [2024-11-25 23:30:52.882352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:20.608 [2024-11-25 23:30:52.882361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.608 [2024-11-25 23:30:52.885677] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:20.608 [2024-11-25 23:30:52.891225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.608 [2024-11-25 
23:30:52.891259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:20.608 [2024-11-25 23:30:52.891270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.424 ms 00:29:20.608 [2024-11-25 23:30:52.891285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.608 [2024-11-25 23:30:52.903377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.608 [2024-11-25 23:30:52.903512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:20.608 [2024-11-25 23:30:52.903531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.854 ms 00:29:20.608 [2024-11-25 23:30:52.903540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.608 [2024-11-25 23:30:52.924981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.608 [2024-11-25 23:30:52.925017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:20.608 [2024-11-25 23:30:52.925028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.422 ms 00:29:20.608 [2024-11-25 23:30:52.925035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.608 [2024-11-25 23:30:52.931178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.608 [2024-11-25 23:30:52.931212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:20.608 [2024-11-25 23:30:52.931222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.107 ms 00:29:20.608 [2024-11-25 23:30:52.931231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.608 [2024-11-25 23:30:52.955439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.608 [2024-11-25 23:30:52.955469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:20.608 [2024-11-25 23:30:52.955480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.172 ms 00:29:20.608 [2024-11-25 23:30:52.955487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.608 [2024-11-25 23:30:52.969118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.608 [2024-11-25 23:30:52.969147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:20.608 [2024-11-25 23:30:52.969157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.600 ms 00:29:20.608 [2024-11-25 23:30:52.969165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.183 [2024-11-25 23:30:53.238009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.183 [2024-11-25 23:30:53.238050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:21.183 [2024-11-25 23:30:53.238079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 268.808 ms 00:29:21.183 [2024-11-25 23:30:53.238087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.183 [2024-11-25 23:30:53.262337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.183 [2024-11-25 23:30:53.262377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:21.183 [2024-11-25 23:30:53.262389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.234 ms 00:29:21.183 [2024-11-25 23:30:53.262407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.183 [2024-11-25 23:30:53.286933] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.183 [2024-11-25 23:30:53.286976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:21.183 [2024-11-25 23:30:53.286988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.484 ms 00:29:21.183 [2024-11-25 23:30:53.286996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.183 [2024-11-25 23:30:53.311400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.184 [2024-11-25 23:30:53.311444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:21.184 [2024-11-25 23:30:53.311457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.358 ms 00:29:21.184 [2024-11-25 23:30:53.311464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.184 [2024-11-25 23:30:53.336006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.184 [2024-11-25 23:30:53.336051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:21.184 [2024-11-25 23:30:53.336079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.469 ms 00:29:21.184 [2024-11-25 23:30:53.336087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.184 [2024-11-25 23:30:53.336131] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:21.184 [2024-11-25 23:30:53.336147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102912 / 261120 wr_cnt: 1 state: open 00:29:21.184 [2024-11-25 23:30:53.336158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 
00:29:21.184 [2024-11-25 23:30:53.336272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 
wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:21.184 [2024-11-25 23:30:53.336803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336876] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:21.185 [2024-11-25 23:30:53.336973] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:21.185 [2024-11-25 23:30:53.336982] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c284eb1b-bd51-42bd-a57e-41d00c4f73df 00:29:21.185 [2024-11-25 23:30:53.337004] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102912 00:29:21.185 [2024-11-25 23:30:53.337012] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103872 00:29:21.185 [2024-11-25 23:30:53.337020] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102912 00:29:21.185 [2024-11-25 23:30:53.337029] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:29:21.185 [2024-11-25 23:30:53.337036] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:21.185 [2024-11-25 23:30:53.337045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:21.185 [2024-11-25 23:30:53.337065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:21.185 [2024-11-25 23:30:53.337073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:21.185 [2024-11-25 23:30:53.337080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:21.185 [2024-11-25 23:30:53.337088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.185 [2024-11-25 23:30:53.337097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:21.185 [2024-11-25 23:30:53.337105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:29:21.185 [2024-11-25 23:30:53.337114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.185 [2024-11-25 23:30:53.350880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.185 [2024-11-25 23:30:53.350922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:21.185 [2024-11-25 23:30:53.350933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
13.747 ms 00:29:21.185 [2024-11-25 23:30:53.350941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.185 [2024-11-25 23:30:53.351379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.185 [2024-11-25 23:30:53.351396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:21.185 [2024-11-25 23:30:53.351405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:29:21.185 [2024-11-25 23:30:53.351420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.185 [2024-11-25 23:30:53.387849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.185 [2024-11-25 23:30:53.387897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:21.185 [2024-11-25 23:30:53.387908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.185 [2024-11-25 23:30:53.387917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.185 [2024-11-25 23:30:53.387983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.185 [2024-11-25 23:30:53.387992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:21.185 [2024-11-25 23:30:53.388000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.185 [2024-11-25 23:30:53.388015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.185 [2024-11-25 23:30:53.388115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.185 [2024-11-25 23:30:53.388128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:21.185 [2024-11-25 23:30:53.388137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.185 [2024-11-25 23:30:53.388145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.185 [2024-11-25 23:30:53.388161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.185 [2024-11-25 23:30:53.388169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:21.185 [2024-11-25 23:30:53.388177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.185 [2024-11-25 23:30:53.388185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.185 [2024-11-25 23:30:53.472981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.185 [2024-11-25 23:30:53.473032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:21.185 [2024-11-25 23:30:53.473044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.185 [2024-11-25 23:30:53.473053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.185 [2024-11-25 23:30:53.543924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.185 [2024-11-25 23:30:53.543977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:21.185 [2024-11-25 23:30:53.543989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.185 [2024-11-25 23:30:53.544005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.185 [2024-11-25 23:30:53.544113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.185 [2024-11-25 23:30:53.544125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:21.185 [2024-11-25 
23:30:53.544134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.185 [2024-11-25 23:30:53.544143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.185 [2024-11-25 23:30:53.544214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.185 [2024-11-25 23:30:53.544224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:21.185 [2024-11-25 23:30:53.544234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.185 [2024-11-25 23:30:53.544242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.185 [2024-11-25 23:30:53.544352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.185 [2024-11-25 23:30:53.544364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:21.185 [2024-11-25 23:30:53.544373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.185 [2024-11-25 23:30:53.544381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.185 [2024-11-25 23:30:53.544413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.185 [2024-11-25 23:30:53.544423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:21.185 [2024-11-25 23:30:53.544431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.185 [2024-11-25 23:30:53.544440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.185 [2024-11-25 23:30:53.544484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.185 [2024-11-25 23:30:53.544494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:21.185 [2024-11-25 23:30:53.544503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.185 [2024-11-25 23:30:53.544511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.185 [2024-11-25 23:30:53.544561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.185 [2024-11-25 23:30:53.544572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:21.185 [2024-11-25 23:30:53.544581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.185 [2024-11-25 23:30:53.544589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.466 [2024-11-25 23:30:53.544727] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 664.369 ms, result 0 00:29:22.854 00:29:22.854 00:29:22.854 23:30:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:25.400 23:30:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:25.400 [2024-11-25 23:30:57.270926] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
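The 'FTL shutdown' management process above finished with result 0, and the ftl_debug statistics dumped just before the clean state was set are enough to re-derive the reported write-amplification figure by hand; WAF here is simply total media writes over user writes:

    WAF = total writes / user writes = 103872 / 102912 = 1.00933... (logged as 1.0093)

The 960 surplus blocks are consistent with the FTL's own housekeeping visible in the shutdown steps above (band, P2L, trim and superblock persists), though the log does not break them down.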
00:29:25.400 [2024-11-25 23:30:57.271025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82719 ] 00:29:25.400 [2024-11-25 23:30:57.426854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.400 [2024-11-25 23:30:57.541357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.661 [2024-11-25 23:30:57.834077] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:25.661 [2024-11-25 23:30:57.834152] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:25.661 [2024-11-25 23:30:57.996246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.661 [2024-11-25 23:30:57.996480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:25.661 [2024-11-25 23:30:57.996506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:25.661 [2024-11-25 23:30:57.996515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.661 [2024-11-25 23:30:57.996588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.661 [2024-11-25 23:30:57.996603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:25.661 [2024-11-25 23:30:57.996612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:29:25.661 [2024-11-25 23:30:57.996621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.661 [2024-11-25 23:30:57.996643] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:25.661 [2024-11-25 23:30:57.997414] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:25.661 [2024-11-25 23:30:57.997437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.661 [2024-11-25 23:30:57.997445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:25.661 [2024-11-25 23:30:57.997456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:29:25.661 [2024-11-25 23:30:57.997464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.661 [2024-11-25 23:30:57.999146] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:25.661 [2024-11-25 23:30:58.013234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.661 [2024-11-25 23:30:58.013282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:25.661 [2024-11-25 23:30:58.013296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.091 ms 00:29:25.661 [2024-11-25 23:30:58.013305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.661 [2024-11-25 23:30:58.013387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.661 [2024-11-25 23:30:58.013398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:25.661 [2024-11-25 23:30:58.013407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:25.661 [2024-11-25 23:30:58.013420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.661 [2024-11-25 23:30:58.021824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
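dirty_shutdown.sh@93 above restarts spdk_dd to read the 1 GiB test pattern back out of ftl0. The transfer size follows from --count=262144 together with the 4 KiB FTL block size implied by the layout dumps (region type 0x0 has blk_sz 0x20, i.e. 32 blocks, rendered as 0.12 MiB). A two-line sanity check of that arithmetic, illustrative only and not part of the test:

# 4096 B/block is inferred from the layout dump, not read from the log directly
blocks = 262144                        # spdk_dd --count=262144
assert blocks * 4096 == 1024 * 2**20   # exactly the "1024/1024 [MB]" the write pass reported

The checksum taken at dirty_shutdown.sh@90 is presumably compared against one of the re-read testfile once this copy completes, which is how the test decides pass/fail.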
00:29:25.661 [2024-11-25 23:30:58.021998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:25.661 [2024-11-25 23:30:58.022016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.324 ms 00:29:25.661 [2024-11-25 23:30:58.022031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.661 [2024-11-25 23:30:58.022136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.661 [2024-11-25 23:30:58.022147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:25.661 [2024-11-25 23:30:58.022156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:25.661 [2024-11-25 23:30:58.022164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.661 [2024-11-25 23:30:58.022210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.661 [2024-11-25 23:30:58.022220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:25.661 [2024-11-25 23:30:58.022229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:25.661 [2024-11-25 23:30:58.022237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.661 [2024-11-25 23:30:58.022263] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:25.924 [2024-11-25 23:30:58.026230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.924 [2024-11-25 23:30:58.026268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:25.924 [2024-11-25 23:30:58.026282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.972 ms 00:29:25.924 [2024-11-25 23:30:58.026291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.924 [2024-11-25 23:30:58.026326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.924 [2024-11-25 23:30:58.026335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:25.924 [2024-11-25 23:30:58.026344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:25.924 [2024-11-25 23:30:58.026352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.924 [2024-11-25 23:30:58.026404] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:25.924 [2024-11-25 23:30:58.026428] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:25.924 [2024-11-25 23:30:58.026466] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:25.924 [2024-11-25 23:30:58.026485] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:25.924 [2024-11-25 23:30:58.026591] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:25.924 [2024-11-25 23:30:58.026603] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:25.924 [2024-11-25 23:30:58.026614] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:25.924 [2024-11-25 23:30:58.026625] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:25.924 [2024-11-25 23:30:58.026635] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:25.924 [2024-11-25 23:30:58.026644] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:25.924 [2024-11-25 23:30:58.026652] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:25.924 [2024-11-25 23:30:58.026660] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:25.924 [2024-11-25 23:30:58.026671] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:25.924 [2024-11-25 23:30:58.026680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.924 [2024-11-25 23:30:58.026688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:25.924 [2024-11-25 23:30:58.026696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:29:25.924 [2024-11-25 23:30:58.026704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.924 [2024-11-25 23:30:58.026788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.924 [2024-11-25 23:30:58.026796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:25.924 [2024-11-25 23:30:58.026804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:25.924 [2024-11-25 23:30:58.026812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.924 [2024-11-25 23:30:58.026918] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:25.924 [2024-11-25 23:30:58.026930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:25.924 [2024-11-25 23:30:58.026939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:25.924 [2024-11-25 23:30:58.026947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.924 [2024-11-25 23:30:58.026955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:25.924 [2024-11-25 23:30:58.026963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:25.924 [2024-11-25 23:30:58.026971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:25.924 [2024-11-25 23:30:58.027174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:25.924 [2024-11-25 23:30:58.027184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:25.924 [2024-11-25 23:30:58.027191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:25.924 [2024-11-25 23:30:58.027198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:25.924 [2024-11-25 23:30:58.027213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:25.924 [2024-11-25 23:30:58.027220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:25.924 [2024-11-25 23:30:58.027235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:25.924 [2024-11-25 23:30:58.027242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:25.924 [2024-11-25 23:30:58.027249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.924 [2024-11-25 23:30:58.027256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:25.924 [2024-11-25 23:30:58.027263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:25.924 [2024-11-25 23:30:58.027271] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.924 [2024-11-25 23:30:58.027279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:25.924 [2024-11-25 23:30:58.027286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:25.924 [2024-11-25 23:30:58.027296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:25.924 [2024-11-25 23:30:58.027303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:25.924 [2024-11-25 23:30:58.027311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:25.924 [2024-11-25 23:30:58.027318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:25.924 [2024-11-25 23:30:58.027325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:25.924 [2024-11-25 23:30:58.027332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:25.924 [2024-11-25 23:30:58.027339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:25.924 [2024-11-25 23:30:58.027346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:25.924 [2024-11-25 23:30:58.027354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:25.924 [2024-11-25 23:30:58.027362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:25.924 [2024-11-25 23:30:58.027369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:25.924 [2024-11-25 23:30:58.027376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:25.924 [2024-11-25 23:30:58.027382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:25.924 [2024-11-25 23:30:58.027390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:25.924 [2024-11-25 23:30:58.027396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:25.924 [2024-11-25 23:30:58.027404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:25.924 [2024-11-25 23:30:58.027411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:25.924 [2024-11-25 23:30:58.027418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:25.924 [2024-11-25 23:30:58.027424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.924 [2024-11-25 23:30:58.027431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:25.924 [2024-11-25 23:30:58.027438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:25.924 [2024-11-25 23:30:58.027445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.925 [2024-11-25 23:30:58.027453] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:25.925 [2024-11-25 23:30:58.027461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:25.925 [2024-11-25 23:30:58.027468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:25.925 [2024-11-25 23:30:58.027475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:25.925 [2024-11-25 23:30:58.027483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:25.925 [2024-11-25 23:30:58.027490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:25.925 [2024-11-25 23:30:58.027497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:25.925 
[2024-11-25 23:30:58.027503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:25.925 [2024-11-25 23:30:58.027510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:25.925 [2024-11-25 23:30:58.027516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:25.925 [2024-11-25 23:30:58.027526] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:25.925 [2024-11-25 23:30:58.027536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:25.925 [2024-11-25 23:30:58.027548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:25.925 [2024-11-25 23:30:58.027555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:25.925 [2024-11-25 23:30:58.027562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:25.925 [2024-11-25 23:30:58.027569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:25.925 [2024-11-25 23:30:58.027577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:25.925 [2024-11-25 23:30:58.027585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:25.925 [2024-11-25 23:30:58.027592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:25.925 [2024-11-25 23:30:58.027599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:25.925 [2024-11-25 23:30:58.027606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:25.925 [2024-11-25 23:30:58.027613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:25.925 [2024-11-25 23:30:58.027620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:25.925 [2024-11-25 23:30:58.027627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:25.925 [2024-11-25 23:30:58.027635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:25.925 [2024-11-25 23:30:58.027642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:25.925 [2024-11-25 23:30:58.027649] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:25.925 [2024-11-25 23:30:58.027657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:25.925 [2024-11-25 23:30:58.027666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:25.925 [2024-11-25 23:30:58.027673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:25.925 [2024-11-25 23:30:58.027681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:25.925 [2024-11-25 23:30:58.027689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:25.925 [2024-11-25 23:30:58.027697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.027705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:25.925 [2024-11-25 23:30:58.027713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:29:25.925 [2024-11-25 23:30:58.027720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.059728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.059779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:25.925 [2024-11-25 23:30:58.059792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.963 ms 00:29:25.925 [2024-11-25 23:30:58.059804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.059896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.059904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:25.925 [2024-11-25 23:30:58.059913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:25.925 [2024-11-25 23:30:58.059922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.108558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.108614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:25.925 [2024-11-25 23:30:58.108628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.574 ms 00:29:25.925 [2024-11-25 23:30:58.108637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.108688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.108698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:25.925 [2024-11-25 23:30:58.108713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:25.925 [2024-11-25 23:30:58.108721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.109381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.109407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:25.925 [2024-11-25 23:30:58.109417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:29:25.925 [2024-11-25 23:30:58.109426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.109588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.109599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:25.925 [2024-11-25 23:30:58.109613] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:29:25.925 [2024-11-25 23:30:58.109621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.125191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.125233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:25.925 [2024-11-25 23:30:58.125248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.549 ms 00:29:25.925 [2024-11-25 23:30:58.125256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.139315] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:25.925 [2024-11-25 23:30:58.139502] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:25.925 [2024-11-25 23:30:58.139522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.139531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:25.925 [2024-11-25 23:30:58.139541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.154 ms 00:29:25.925 [2024-11-25 23:30:58.139549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.165191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.165240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:25.925 [2024-11-25 23:30:58.165252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.598 ms 00:29:25.925 [2024-11-25 23:30:58.165261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.178102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.178285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:25.925 [2024-11-25 23:30:58.178305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.779 ms 00:29:25.925 [2024-11-25 23:30:58.178314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.190838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.190883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:25.925 [2024-11-25 23:30:58.190895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.483 ms 00:29:25.925 [2024-11-25 23:30:58.190903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.191573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.191597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:25.925 [2024-11-25 23:30:58.191610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:29:25.925 [2024-11-25 23:30:58.191618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.255564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.255629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:25.925 [2024-11-25 23:30:58.255651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.924 ms 00:29:25.925 [2024-11-25 23:30:58.255661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.266899] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:25.925 [2024-11-25 23:30:58.269984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.270174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:25.925 [2024-11-25 23:30:58.270194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.264 ms 00:29:25.925 [2024-11-25 23:30:58.270204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.270291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.270303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:25.925 [2024-11-25 23:30:58.270312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:25.925 [2024-11-25 23:30:58.270324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.925 [2024-11-25 23:30:58.272011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.925 [2024-11-25 23:30:58.272076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:25.926 [2024-11-25 23:30:58.272088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.649 ms 00:29:25.926 [2024-11-25 23:30:58.272098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.926 [2024-11-25 23:30:58.272127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.926 [2024-11-25 23:30:58.272137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:25.926 [2024-11-25 23:30:58.272146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:25.926 [2024-11-25 23:30:58.272155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.926 [2024-11-25 23:30:58.272201] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:25.926 [2024-11-25 23:30:58.272213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.926 [2024-11-25 23:30:58.272222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:25.926 [2024-11-25 23:30:58.272231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:25.926 [2024-11-25 23:30:58.272239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.186 [2024-11-25 23:30:58.298377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.186 [2024-11-25 23:30:58.298427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:26.186 [2024-11-25 23:30:58.298441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.119 ms 00:29:26.186 [2024-11-25 23:30:58.298456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.186 [2024-11-25 23:30:58.298546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.186 [2024-11-25 23:30:58.298557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:26.186 [2024-11-25 23:30:58.298567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:29:26.186 [2024-11-25 23:30:58.298576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:26.186 [2024-11-25 23:30:58.299828] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.097 ms, result 0 00:29:27.126  [2024-11-25T23:31:00.880Z] Copying: 1152/1048576 [kB] (1152 kBps) [2024-11-25T23:31:01.822Z] Copying: 4392/1048576 [kB] (3240 kBps) [2024-11-25T23:31:02.767Z] Copying: 18/1024 [MB] (13 MBps) [2024-11-25T23:31:03.709Z] Copying: 37/1024 [MB] (19 MBps) [2024-11-25T23:31:04.646Z] Copying: 58/1024 [MB] (21 MBps) [2024-11-25T23:31:05.592Z] Copying: 117/1024 [MB] (58 MBps) [2024-11-25T23:31:06.598Z] Copying: 133/1024 [MB] (16 MBps) [2024-11-25T23:31:07.560Z] Copying: 160/1024 [MB] (26 MBps) [2024-11-25T23:31:08.503Z] Copying: 186/1024 [MB] (26 MBps) [2024-11-25T23:31:09.886Z] Copying: 207/1024 [MB] (20 MBps) [2024-11-25T23:31:10.825Z] Copying: 237/1024 [MB] (29 MBps) [2024-11-25T23:31:11.765Z] Copying: 272/1024 [MB] (35 MBps) [2024-11-25T23:31:12.706Z] Copying: 301/1024 [MB] (28 MBps) [2024-11-25T23:31:13.648Z] Copying: 330/1024 [MB] (29 MBps) [2024-11-25T23:31:14.588Z] Copying: 364/1024 [MB] (34 MBps) [2024-11-25T23:31:15.526Z] Copying: 394/1024 [MB] (30 MBps) [2024-11-25T23:31:16.909Z] Copying: 431/1024 [MB] (37 MBps) [2024-11-25T23:31:17.855Z] Copying: 463/1024 [MB] (31 MBps) [2024-11-25T23:31:18.796Z] Copying: 494/1024 [MB] (31 MBps) [2024-11-25T23:31:19.737Z] Copying: 530/1024 [MB] (36 MBps) [2024-11-25T23:31:20.679Z] Copying: 563/1024 [MB] (32 MBps) [2024-11-25T23:31:21.623Z] Copying: 592/1024 [MB] (29 MBps) [2024-11-25T23:31:22.568Z] Copying: 622/1024 [MB] (29 MBps) [2024-11-25T23:31:23.512Z] Copying: 650/1024 [MB] (28 MBps) [2024-11-25T23:31:24.898Z] Copying: 673/1024 [MB] (23 MBps) [2024-11-25T23:31:25.844Z] Copying: 698/1024 [MB] (24 MBps) [2024-11-25T23:31:26.788Z] Copying: 727/1024 [MB] (28 MBps) [2024-11-25T23:31:27.733Z] Copying: 754/1024 [MB] (27 MBps) [2024-11-25T23:31:28.674Z] Copying: 783/1024 [MB] (28 MBps) [2024-11-25T23:31:29.620Z] Copying: 814/1024 [MB] (31 MBps) [2024-11-25T23:31:30.565Z] Copying: 842/1024 [MB] (28 MBps) [2024-11-25T23:31:31.510Z] Copying: 870/1024 [MB] (27 MBps) [2024-11-25T23:31:32.912Z] Copying: 899/1024 [MB] (29 MBps) [2024-11-25T23:31:33.854Z] Copying: 916/1024 [MB] (16 MBps) [2024-11-25T23:31:34.809Z] Copying: 941/1024 [MB] (25 MBps) [2024-11-25T23:31:35.833Z] Copying: 957/1024 [MB] (15 MBps) [2024-11-25T23:31:36.777Z] Copying: 985/1024 [MB] (28 MBps) [2024-11-25T23:31:36.777Z] Copying: 1015/1024 [MB] (30 MBps) [2024-11-25T23:31:37.349Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-25 23:31:37.071596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.980 [2024-11-25 23:31:37.071779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:04.980 [2024-11-25 23:31:37.071800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:04.980 [2024-11-25 23:31:37.071810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.980 [2024-11-25 23:31:37.071839] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:04.980 [2024-11-25 23:31:37.075485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.980 [2024-11-25 23:31:37.075539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:04.980 [2024-11-25 23:31:37.075552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.626 ms 00:30:04.980 [2024-11-25 23:31:37.075561] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.980 [2024-11-25 23:31:37.075839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.980 [2024-11-25 23:31:37.075857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:04.980 [2024-11-25 23:31:37.075868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:30:04.980 [2024-11-25 23:31:37.075877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.980 [2024-11-25 23:31:37.092193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.980 [2024-11-25 23:31:37.092376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:04.980 [2024-11-25 23:31:37.092685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.292 ms 00:30:04.980 [2024-11-25 23:31:37.093023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.980 [2024-11-25 23:31:37.099800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.980 [2024-11-25 23:31:37.099978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:04.980 [2024-11-25 23:31:37.100132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.649 ms 00:30:04.980 [2024-11-25 23:31:37.100166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.980 [2024-11-25 23:31:37.127992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.980 [2024-11-25 23:31:37.128190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:04.980 [2024-11-25 23:31:37.128321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.726 ms 00:30:04.980 [2024-11-25 23:31:37.128347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.980 [2024-11-25 23:31:37.145670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.980 [2024-11-25 23:31:37.145839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:04.980 [2024-11-25 23:31:37.146022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.272 ms 00:30:04.980 [2024-11-25 23:31:37.146098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.980 [2024-11-25 23:31:37.151340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.980 [2024-11-25 23:31:37.151530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:04.980 [2024-11-25 23:31:37.151708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.173 ms 00:30:04.980 [2024-11-25 23:31:37.151749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.980 [2024-11-25 23:31:37.177685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.980 [2024-11-25 23:31:37.177861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:04.980 [2024-11-25 23:31:37.177922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.886 ms 00:30:04.980 [2024-11-25 23:31:37.177945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.980 [2024-11-25 23:31:37.203148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.980 [2024-11-25 23:31:37.203306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:04.980 [2024-11-25 23:31:37.203364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.040 ms 
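Each FTL management step in the shutdown sequence above is emitted by trace_step() as a group of records: an "Action" (or "Rollback") record followed by "name: ...", "duration: ... ms" and "status: ..." records. When profiling startup or shutdown it can be handy to tabulate these; below is a minimal sketch — a hypothetical helper, not part of SPDK — that pairs each step name with its duration, assuming the console log with one record per line as emitted on the terminal:

```python
import re

# Illustrative helper (not part of SPDK): pair each trace_step "name: ..."
# record with the "duration: ... ms" record that follows it.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s*$")
DUR_RE  = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def step_durations(log_lines):
    """Yield (step_name, duration_ms) in the order the steps were logged."""
    name = None
    for line in log_lines:
        if (m := NAME_RE.search(line)):
            name = m.group(1)
        elif (m := DUR_RE.search(line)) and name is not None:
            yield name, float(m.group(1))
            name = None

# With the persist records above this would yield pairs such as
# ("Persist NV cache metadata", 27.726) and ("Persist trim metadata", 25.040).
```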
00:30:04.980 [2024-11-25 23:31:37.203386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.980 [2024-11-25 23:31:37.227534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.980 [2024-11-25 23:31:37.227690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:04.980 [2024-11-25 23:31:37.227751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.039 ms 00:30:04.980 [2024-11-25 23:31:37.227773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.980 [2024-11-25 23:31:37.252544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.980 [2024-11-25 23:31:37.252698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:04.980 [2024-11-25 23:31:37.252757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.617 ms 00:30:04.980 [2024-11-25 23:31:37.252780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.980 [2024-11-25 23:31:37.252827] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:04.980 [2024-11-25 23:31:37.252857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:04.980 [2024-11-25 23:31:37.252906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:04.980 [2024-11-25 23:31:37.252936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.252966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: 
free 00:30:04.980 [2024-11-25 23:31:37.253696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:04.980 [2024-11-25 23:31:37.253795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 
261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.253994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254386] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:04.981 [2024-11-25 23:31:37.254471] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:04.982 [2024-11-25 23:31:37.254481] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c284eb1b-bd51-42bd-a57e-41d00c4f73df 00:30:04.982 [2024-11-25 23:31:37.254490] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:04.982 [2024-11-25 23:31:37.254499] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 161728 00:30:04.982 [2024-11-25 23:31:37.254512] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 159744 00:30:04.982 [2024-11-25 23:31:37.254522] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0124 00:30:04.982 [2024-11-25 23:31:37.254530] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:04.982 [2024-11-25 23:31:37.254548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:04.982 [2024-11-25 23:31:37.254557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:04.982 [2024-11-25 23:31:37.254564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:04.982 [2024-11-25 23:31:37.254572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:04.982 [2024-11-25 23:31:37.254581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.982 [2024-11-25 23:31:37.254590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:04.982 [2024-11-25 23:31:37.254600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.756 ms 00:30:04.982 [2024-11-25 23:31:37.254608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.982 [2024-11-25 23:31:37.269362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.982 [2024-11-25 23:31:37.269505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:04.982 [2024-11-25 23:31:37.269557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.730 ms 00:30:04.982 [2024-11-25 23:31:37.269581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.982 [2024-11-25 23:31:37.270030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.982 [2024-11-25 23:31:37.270160] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:04.982 [2024-11-25 23:31:37.270268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:30:04.982 [2024-11-25 23:31:37.270297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.982 [2024-11-25 23:31:37.309788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.982 [2024-11-25 23:31:37.309948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:04.982 [2024-11-25 23:31:37.310008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.982 [2024-11-25 23:31:37.310033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.982 [2024-11-25 23:31:37.310123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.982 [2024-11-25 23:31:37.310148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:04.982 [2024-11-25 23:31:37.310173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.982 [2024-11-25 23:31:37.310192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.982 [2024-11-25 23:31:37.310309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.982 [2024-11-25 23:31:37.310390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:04.982 [2024-11-25 23:31:37.310412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.982 [2024-11-25 23:31:37.310431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.982 [2024-11-25 23:31:37.310460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.982 [2024-11-25 23:31:37.310482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:04.982 [2024-11-25 23:31:37.310502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.982 [2024-11-25 23:31:37.310829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.244 [2024-11-25 23:31:37.401576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.244 [2024-11-25 23:31:37.401733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:05.244 [2024-11-25 23:31:37.401753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.244 [2024-11-25 23:31:37.401762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.244 [2024-11-25 23:31:37.475104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.244 [2024-11-25 23:31:37.475168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:05.244 [2024-11-25 23:31:37.475182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.244 [2024-11-25 23:31:37.475192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.244 [2024-11-25 23:31:37.475271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.244 [2024-11-25 23:31:37.475290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:05.244 [2024-11-25 23:31:37.475301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.244 [2024-11-25 23:31:37.475311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.244 [2024-11-25 23:31:37.475382] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.244 [2024-11-25 23:31:37.475395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:05.244 [2024-11-25 23:31:37.475405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.244 [2024-11-25 23:31:37.475414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.244 [2024-11-25 23:31:37.475521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.244 [2024-11-25 23:31:37.475533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:05.244 [2024-11-25 23:31:37.475547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.244 [2024-11-25 23:31:37.475557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.244 [2024-11-25 23:31:37.475603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.244 [2024-11-25 23:31:37.475615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:05.244 [2024-11-25 23:31:37.475624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.244 [2024-11-25 23:31:37.475634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.244 [2024-11-25 23:31:37.475685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.244 [2024-11-25 23:31:37.475705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:05.244 [2024-11-25 23:31:37.475718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.244 [2024-11-25 23:31:37.475727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.244 [2024-11-25 23:31:37.475789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.244 [2024-11-25 23:31:37.475800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:05.244 [2024-11-25 23:31:37.475810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.244 [2024-11-25 23:31:37.475820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.244 [2024-11-25 23:31:37.475985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 404.341 ms, result 0 00:30:06.190 00:30:06.190 00:30:06.190 23:31:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:08.104 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:08.104 23:31:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:08.104 [2024-11-25 23:31:40.366186] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
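Before the second spdk_dd pass launched just above, the ftl_debug.c statistics dump at the end of the shutdown reported total writes 161728 against user writes 159744 and a write-amplification factor (WAF) of 1.0124. WAF here is simply the ratio of all media writes (user data plus metadata and relocation traffic) to user writes; a quick standalone check of the reported figure (counter units are whatever ftl_debug.c reports, presumably FTL blocks):

```python
# WAF = total media writes / user writes, per the ftl_debug.c dump above.
total_writes = 161_728  # "total writes" from the stats dump
user_writes  = 159_744  # "user writes" from the stats dump
waf = total_writes / user_writes
print(f"WAF: {waf:.4f}")  # -> WAF: 1.0124, matching the logged value
```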
00:30:08.104 [2024-11-25 23:31:40.366287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83150 ] 00:30:08.364 [2024-11-25 23:31:40.521750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.364 [2024-11-25 23:31:40.626859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.625 [2024-11-25 23:31:40.922170] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:08.625 [2024-11-25 23:31:40.922251] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:08.887 [2024-11-25 23:31:41.086550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.887 [2024-11-25 23:31:41.086621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:08.887 [2024-11-25 23:31:41.086636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:08.887 [2024-11-25 23:31:41.086645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.887 [2024-11-25 23:31:41.086704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.887 [2024-11-25 23:31:41.086717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:08.887 [2024-11-25 23:31:41.086727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:08.887 [2024-11-25 23:31:41.086735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.887 [2024-11-25 23:31:41.086756] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:08.887 [2024-11-25 23:31:41.087536] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:08.887 [2024-11-25 23:31:41.087559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.887 [2024-11-25 23:31:41.087568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:08.887 [2024-11-25 23:31:41.087577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:30:08.887 [2024-11-25 23:31:41.087585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.887 [2024-11-25 23:31:41.089407] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:08.887 [2024-11-25 23:31:41.104040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.888 [2024-11-25 23:31:41.104256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:08.888 [2024-11-25 23:31:41.104279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.635 ms 00:30:08.888 [2024-11-25 23:31:41.104288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.888 [2024-11-25 23:31:41.104695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.888 [2024-11-25 23:31:41.104738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:08.888 [2024-11-25 23:31:41.104750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:08.888 [2024-11-25 23:31:41.104759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.888 [2024-11-25 23:31:41.113295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:08.888 [2024-11-25 23:31:41.113347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:08.888 [2024-11-25 23:31:41.113359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.437 ms 00:30:08.888 [2024-11-25 23:31:41.113375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.888 [2024-11-25 23:31:41.113463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.888 [2024-11-25 23:31:41.113473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:08.888 [2024-11-25 23:31:41.113482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:30:08.888 [2024-11-25 23:31:41.113490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.888 [2024-11-25 23:31:41.113537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.888 [2024-11-25 23:31:41.113546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:08.888 [2024-11-25 23:31:41.113554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:08.888 [2024-11-25 23:31:41.113562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.888 [2024-11-25 23:31:41.113591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:08.888 [2024-11-25 23:31:41.117978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.888 [2024-11-25 23:31:41.118020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:08.888 [2024-11-25 23:31:41.118036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.394 ms 00:30:08.888 [2024-11-25 23:31:41.118045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.888 [2024-11-25 23:31:41.118097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.888 [2024-11-25 23:31:41.118107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:08.888 [2024-11-25 23:31:41.118115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:08.888 [2024-11-25 23:31:41.118123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.888 [2024-11-25 23:31:41.118177] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:08.888 [2024-11-25 23:31:41.118201] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:08.888 [2024-11-25 23:31:41.118239] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:08.888 [2024-11-25 23:31:41.118259] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:08.888 [2024-11-25 23:31:41.118367] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:08.888 [2024-11-25 23:31:41.118377] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:08.888 [2024-11-25 23:31:41.118388] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:08.888 [2024-11-25 23:31:41.118399] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:08.888 [2024-11-25 23:31:41.118409] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:08.888 [2024-11-25 23:31:41.118417] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:08.888 [2024-11-25 23:31:41.118425] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:08.888 [2024-11-25 23:31:41.118433] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:08.888 [2024-11-25 23:31:41.118444] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:08.888 [2024-11-25 23:31:41.118452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.888 [2024-11-25 23:31:41.118461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:08.888 [2024-11-25 23:31:41.118469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:30:08.888 [2024-11-25 23:31:41.118477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.888 [2024-11-25 23:31:41.118560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.888 [2024-11-25 23:31:41.118568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:08.888 [2024-11-25 23:31:41.118575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:08.888 [2024-11-25 23:31:41.118582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.888 [2024-11-25 23:31:41.118690] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:08.888 [2024-11-25 23:31:41.118700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:08.888 [2024-11-25 23:31:41.118709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:08.888 [2024-11-25 23:31:41.118717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.888 [2024-11-25 23:31:41.118725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:08.888 [2024-11-25 23:31:41.118732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:08.888 [2024-11-25 23:31:41.118738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:08.888 [2024-11-25 23:31:41.118745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:08.888 [2024-11-25 23:31:41.118755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:08.888 [2024-11-25 23:31:41.118762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:08.888 [2024-11-25 23:31:41.118769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:08.888 [2024-11-25 23:31:41.118776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:08.888 [2024-11-25 23:31:41.118784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:08.888 [2024-11-25 23:31:41.118798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:08.888 [2024-11-25 23:31:41.118805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:08.888 [2024-11-25 23:31:41.118812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.888 [2024-11-25 23:31:41.118818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:08.888 [2024-11-25 23:31:41.118825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:08.888 [2024-11-25 23:31:41.118832] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.888 [2024-11-25 23:31:41.118838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:08.888 [2024-11-25 23:31:41.118845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:08.888 [2024-11-25 23:31:41.118852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.888 [2024-11-25 23:31:41.118859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:08.888 [2024-11-25 23:31:41.118866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:08.888 [2024-11-25 23:31:41.118872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.888 [2024-11-25 23:31:41.118879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:08.888 [2024-11-25 23:31:41.118886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:08.888 [2024-11-25 23:31:41.118893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.888 [2024-11-25 23:31:41.118899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:08.888 [2024-11-25 23:31:41.118907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:08.888 [2024-11-25 23:31:41.118913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.888 [2024-11-25 23:31:41.118919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:08.888 [2024-11-25 23:31:41.118926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:08.888 [2024-11-25 23:31:41.118934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:08.888 [2024-11-25 23:31:41.118957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:08.888 [2024-11-25 23:31:41.118964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:08.888 [2024-11-25 23:31:41.118970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:08.888 [2024-11-25 23:31:41.118977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:08.888 [2024-11-25 23:31:41.118983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:08.888 [2024-11-25 23:31:41.118990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.888 [2024-11-25 23:31:41.119003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:08.888 [2024-11-25 23:31:41.119011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:08.888 [2024-11-25 23:31:41.119018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.888 [2024-11-25 23:31:41.119024] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:08.888 [2024-11-25 23:31:41.119033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:08.888 [2024-11-25 23:31:41.119041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:08.888 [2024-11-25 23:31:41.119049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.888 [2024-11-25 23:31:41.119073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:08.888 [2024-11-25 23:31:41.119080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:08.888 [2024-11-25 23:31:41.119087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:08.888 
[2024-11-25 23:31:41.119095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:08.888 [2024-11-25 23:31:41.119102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:08.888 [2024-11-25 23:31:41.119109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:08.888 [2024-11-25 23:31:41.119117] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:08.888 [2024-11-25 23:31:41.119127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.889 [2024-11-25 23:31:41.119139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:08.889 [2024-11-25 23:31:41.119146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:08.889 [2024-11-25 23:31:41.119154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:08.889 [2024-11-25 23:31:41.119162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:08.889 [2024-11-25 23:31:41.119169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:08.889 [2024-11-25 23:31:41.119177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:08.889 [2024-11-25 23:31:41.119185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:08.889 [2024-11-25 23:31:41.119193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:08.889 [2024-11-25 23:31:41.119201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:08.889 [2024-11-25 23:31:41.119209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:08.889 [2024-11-25 23:31:41.119216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:08.889 [2024-11-25 23:31:41.119223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:08.889 [2024-11-25 23:31:41.119231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:08.889 [2024-11-25 23:31:41.119239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:08.889 [2024-11-25 23:31:41.119246] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:08.889 [2024-11-25 23:31:41.119256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.889 [2024-11-25 23:31:41.119265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:08.889 [2024-11-25 23:31:41.119274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:08.889 [2024-11-25 23:31:41.119281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:08.889 [2024-11-25 23:31:41.119289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:08.889 [2024-11-25 23:31:41.119296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.889 [2024-11-25 23:31:41.119304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:08.889 [2024-11-25 23:31:41.119311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:30:08.889 [2024-11-25 23:31:41.119319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.889 [2024-11-25 23:31:41.152310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.889 [2024-11-25 23:31:41.152358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:08.889 [2024-11-25 23:31:41.152371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.946 ms 00:30:08.889 [2024-11-25 23:31:41.152494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.889 [2024-11-25 23:31:41.152588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.889 [2024-11-25 23:31:41.152597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:08.889 [2024-11-25 23:31:41.152606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:30:08.889 [2024-11-25 23:31:41.152614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.889 [2024-11-25 23:31:41.195834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.889 [2024-11-25 23:31:41.195892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:08.889 [2024-11-25 23:31:41.195906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.158 ms 00:30:08.889 [2024-11-25 23:31:41.195915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.889 [2024-11-25 23:31:41.195967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.889 [2024-11-25 23:31:41.195977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:08.889 [2024-11-25 23:31:41.195991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:08.889 [2024-11-25 23:31:41.196000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.889 [2024-11-25 23:31:41.196628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.889 [2024-11-25 23:31:41.196655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:08.889 [2024-11-25 23:31:41.196667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:30:08.889 [2024-11-25 23:31:41.196676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.889 [2024-11-25 23:31:41.196843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.889 [2024-11-25 23:31:41.196863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:08.889 [2024-11-25 23:31:41.196899] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:30:08.889 [2024-11-25 23:31:41.196907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.889 [2024-11-25 23:31:41.212901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.889 [2024-11-25 23:31:41.213113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:08.889 [2024-11-25 23:31:41.213140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.973 ms 00:30:08.889 [2024-11-25 23:31:41.213149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.889 [2024-11-25 23:31:41.227622] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:08.889 [2024-11-25 23:31:41.227674] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:08.889 [2024-11-25 23:31:41.227688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.889 [2024-11-25 23:31:41.227697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:08.889 [2024-11-25 23:31:41.227707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.423 ms 00:30:08.889 [2024-11-25 23:31:41.227715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.151 [2024-11-25 23:31:41.253386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.151 [2024-11-25 23:31:41.253440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:09.151 [2024-11-25 23:31:41.253453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.613 ms 00:30:09.151 [2024-11-25 23:31:41.253461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.151 [2024-11-25 23:31:41.266741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.151 [2024-11-25 23:31:41.266792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:09.151 [2024-11-25 23:31:41.266804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.207 ms 00:30:09.151 [2024-11-25 23:31:41.266813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.151 [2024-11-25 23:31:41.279425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.151 [2024-11-25 23:31:41.279473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:09.151 [2024-11-25 23:31:41.279486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.562 ms 00:30:09.151 [2024-11-25 23:31:41.279494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.151 [2024-11-25 23:31:41.280164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.151 [2024-11-25 23:31:41.280194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:09.151 [2024-11-25 23:31:41.280208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:30:09.151 [2024-11-25 23:31:41.280217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.151 [2024-11-25 23:31:41.345837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.151 [2024-11-25 23:31:41.345907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:09.151 [2024-11-25 23:31:41.345930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.599 ms 00:30:09.151 [2024-11-25 23:31:41.345939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.151 [2024-11-25 23:31:41.357053] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:09.151 [2024-11-25 23:31:41.360170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.151 [2024-11-25 23:31:41.360217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:09.151 [2024-11-25 23:31:41.360229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.164 ms 00:30:09.151 [2024-11-25 23:31:41.360238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.151 [2024-11-25 23:31:41.360329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.151 [2024-11-25 23:31:41.360340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:09.151 [2024-11-25 23:31:41.360350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:30:09.151 [2024-11-25 23:31:41.360362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.151 [2024-11-25 23:31:41.361275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.151 [2024-11-25 23:31:41.361316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:09.151 [2024-11-25 23:31:41.361329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:30:09.151 [2024-11-25 23:31:41.361338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.151 [2024-11-25 23:31:41.361370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.151 [2024-11-25 23:31:41.361380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:09.151 [2024-11-25 23:31:41.361390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:09.151 [2024-11-25 23:31:41.361399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.151 [2024-11-25 23:31:41.361449] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:09.151 [2024-11-25 23:31:41.361460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.151 [2024-11-25 23:31:41.361470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:09.151 [2024-11-25 23:31:41.361479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:09.151 [2024-11-25 23:31:41.361487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.151 [2024-11-25 23:31:41.387751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.151 [2024-11-25 23:31:41.387806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:09.151 [2024-11-25 23:31:41.387820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.244 ms 00:30:09.151 [2024-11-25 23:31:41.387834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.151 [2024-11-25 23:31:41.387925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.151 [2024-11-25 23:31:41.387936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:09.151 [2024-11-25 23:31:41.387946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:30:09.151 [2024-11-25 23:31:41.387955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:09.151 [2024-11-25 23:31:41.389470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.397 ms, result 0 00:30:10.534  [2024-11-25T23:31:43.846Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-25T23:31:44.790Z] Copying: 42/1024 [MB] (17 MBps) [2024-11-25T23:31:45.727Z] Copying: 60/1024 [MB] (18 MBps) [2024-11-25T23:31:46.669Z] Copying: 85/1024 [MB] (24 MBps) [2024-11-25T23:31:47.612Z] Copying: 109/1024 [MB] (24 MBps) [2024-11-25T23:31:49.000Z] Copying: 134/1024 [MB] (24 MBps) [2024-11-25T23:31:49.941Z] Copying: 154/1024 [MB] (19 MBps) [2024-11-25T23:31:50.882Z] Copying: 179/1024 [MB] (24 MBps) [2024-11-25T23:31:51.825Z] Copying: 204/1024 [MB] (25 MBps) [2024-11-25T23:31:52.770Z] Copying: 221/1024 [MB] (16 MBps) [2024-11-25T23:31:53.712Z] Copying: 235/1024 [MB] (14 MBps) [2024-11-25T23:31:54.651Z] Copying: 258/1024 [MB] (22 MBps) [2024-11-25T23:31:55.590Z] Copying: 278/1024 [MB] (19 MBps) [2024-11-25T23:31:57.011Z] Copying: 305/1024 [MB] (27 MBps) [2024-11-25T23:31:57.583Z] Copying: 325/1024 [MB] (19 MBps) [2024-11-25T23:31:58.965Z] Copying: 343/1024 [MB] (17 MBps) [2024-11-25T23:31:59.906Z] Copying: 354/1024 [MB] (11 MBps) [2024-11-25T23:32:00.849Z] Copying: 373/1024 [MB] (18 MBps) [2024-11-25T23:32:01.792Z] Copying: 385/1024 [MB] (11 MBps) [2024-11-25T23:32:02.735Z] Copying: 395/1024 [MB] (10 MBps) [2024-11-25T23:32:03.731Z] Copying: 406/1024 [MB] (10 MBps) [2024-11-25T23:32:04.694Z] Copying: 420/1024 [MB] (13 MBps) [2024-11-25T23:32:05.639Z] Copying: 443/1024 [MB] (22 MBps) [2024-11-25T23:32:06.583Z] Copying: 454/1024 [MB] (10 MBps) [2024-11-25T23:32:07.971Z] Copying: 470/1024 [MB] (16 MBps) [2024-11-25T23:32:08.917Z] Copying: 486/1024 [MB] (15 MBps) [2024-11-25T23:32:09.862Z] Copying: 504/1024 [MB] (17 MBps) [2024-11-25T23:32:10.806Z] Copying: 524/1024 [MB] (20 MBps) [2024-11-25T23:32:11.751Z] Copying: 541/1024 [MB] (17 MBps) [2024-11-25T23:32:12.694Z] Copying: 553/1024 [MB] (12 MBps) [2024-11-25T23:32:13.638Z] Copying: 569/1024 [MB] (15 MBps) [2024-11-25T23:32:14.584Z] Copying: 592/1024 [MB] (23 MBps) [2024-11-25T23:32:15.972Z] Copying: 612/1024 [MB] (19 MBps) [2024-11-25T23:32:16.917Z] Copying: 626/1024 [MB] (14 MBps) [2024-11-25T23:32:17.860Z] Copying: 636/1024 [MB] (10 MBps) [2024-11-25T23:32:18.834Z] Copying: 646/1024 [MB] (10 MBps) [2024-11-25T23:32:19.779Z] Copying: 658/1024 [MB] (11 MBps) [2024-11-25T23:32:20.722Z] Copying: 668/1024 [MB] (10 MBps) [2024-11-25T23:32:21.667Z] Copying: 680/1024 [MB] (11 MBps) [2024-11-25T23:32:22.612Z] Copying: 691/1024 [MB] (11 MBps) [2024-11-25T23:32:24.001Z] Copying: 702/1024 [MB] (11 MBps) [2024-11-25T23:32:24.574Z] Copying: 713/1024 [MB] (10 MBps) [2024-11-25T23:32:25.961Z] Copying: 724/1024 [MB] (11 MBps) [2024-11-25T23:32:26.905Z] Copying: 736/1024 [MB] (11 MBps) [2024-11-25T23:32:27.850Z] Copying: 748/1024 [MB] (11 MBps) [2024-11-25T23:32:28.794Z] Copying: 759/1024 [MB] (11 MBps) [2024-11-25T23:32:29.744Z] Copying: 770/1024 [MB] (11 MBps) [2024-11-25T23:32:30.687Z] Copying: 781/1024 [MB] (11 MBps) [2024-11-25T23:32:31.631Z] Copying: 792/1024 [MB] (11 MBps) [2024-11-25T23:32:32.637Z] Copying: 804/1024 [MB] (11 MBps) [2024-11-25T23:32:33.576Z] Copying: 815/1024 [MB] (10 MBps) [2024-11-25T23:32:34.961Z] Copying: 832/1024 [MB] (17 MBps) [2024-11-25T23:32:35.906Z] Copying: 849/1024 [MB] (17 MBps) [2024-11-25T23:32:36.849Z] Copying: 865/1024 [MB] (15 MBps) [2024-11-25T23:32:37.791Z] Copying: 879/1024 [MB] (14 MBps) [2024-11-25T23:32:38.738Z] Copying: 891/1024 [MB] (11 MBps) 
[2024-11-25T23:32:39.693Z] Copying: 904/1024 [MB] (13 MBps) [2024-11-25T23:32:40.636Z] Copying: 914/1024 [MB] (10 MBps) [2024-11-25T23:32:41.582Z] Copying: 927/1024 [MB] (12 MBps) [2024-11-25T23:32:42.969Z] Copying: 942/1024 [MB] (15 MBps) [2024-11-25T23:32:43.915Z] Copying: 955/1024 [MB] (13 MBps) [2024-11-25T23:32:44.859Z] Copying: 972/1024 [MB] (17 MBps) [2024-11-25T23:32:45.805Z] Copying: 994/1024 [MB] (21 MBps) [2024-11-25T23:32:46.067Z] Copying: 1015/1024 [MB] (21 MBps) [2024-11-25T23:32:46.067Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-25 23:32:46.018823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.698 [2024-11-25 23:32:46.018890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:13.698 [2024-11-25 23:32:46.018905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:13.698 [2024-11-25 23:32:46.018915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.698 [2024-11-25 23:32:46.018938] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:13.698 [2024-11-25 23:32:46.022076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.698 [2024-11-25 23:32:46.022120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:13.698 [2024-11-25 23:32:46.022153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.123 ms 00:31:13.698 [2024-11-25 23:32:46.022162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.698 [2024-11-25 23:32:46.022387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.698 [2024-11-25 23:32:46.022404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:13.698 [2024-11-25 23:32:46.022413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:31:13.698 [2024-11-25 23:32:46.022421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.698 [2024-11-25 23:32:46.025876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.698 [2024-11-25 23:32:46.026034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:13.698 [2024-11-25 23:32:46.026050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.441 ms 00:31:13.698 [2024-11-25 23:32:46.026082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.698 [2024-11-25 23:32:46.032800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.698 [2024-11-25 23:32:46.032840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:13.698 [2024-11-25 23:32:46.032853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.694 ms 00:31:13.698 [2024-11-25 23:32:46.032861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.698 [2024-11-25 23:32:46.061035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.698 [2024-11-25 23:32:46.061091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:13.698 [2024-11-25 23:32:46.061104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.101 ms 00:31:13.698 [2024-11-25 23:32:46.061112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.960 [2024-11-25 23:32:46.076774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.960 [2024-11-25 23:32:46.076978] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:13.960 [2024-11-25 23:32:46.077002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.615 ms 00:31:13.960 [2024-11-25 23:32:46.077011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.960 [2024-11-25 23:32:46.081745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.960 [2024-11-25 23:32:46.081791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:13.960 [2024-11-25 23:32:46.081802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.654 ms 00:31:13.960 [2024-11-25 23:32:46.081810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.960 [2024-11-25 23:32:46.108248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.960 [2024-11-25 23:32:46.108292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:13.960 [2024-11-25 23:32:46.108303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.421 ms 00:31:13.960 [2024-11-25 23:32:46.108310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.960 [2024-11-25 23:32:46.133642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.960 [2024-11-25 23:32:46.133685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:13.960 [2024-11-25 23:32:46.133696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.285 ms 00:31:13.960 [2024-11-25 23:32:46.133703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.960 [2024-11-25 23:32:46.158306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.960 [2024-11-25 23:32:46.158351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:13.960 [2024-11-25 23:32:46.158362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.557 ms 00:31:13.960 [2024-11-25 23:32:46.158369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.960 [2024-11-25 23:32:46.182577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.960 [2024-11-25 23:32:46.182620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:13.960 [2024-11-25 23:32:46.182631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.137 ms 00:31:13.960 [2024-11-25 23:32:46.182638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.960 [2024-11-25 23:32:46.182681] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:13.960 [2024-11-25 23:32:46.182704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:13.960 [2024-11-25 23:32:46.182715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:31:13.960 [2024-11-25 23:32:46.182724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 
[2024-11-25 23:32:46.182757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:13.960 [2024-11-25 23:32:46.182856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: 
free 00:31:13.961 [2024-11-25 23:32:46.182951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.182995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 
261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:13.961 [2024-11-25 23:32:46.183545] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:13.961 [2024-11-25 23:32:46.183556] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c284eb1b-bd51-42bd-a57e-41d00c4f73df 00:31:13.961 [2024-11-25 23:32:46.183564] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:31:13.961 [2024-11-25 23:32:46.183571] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:13.961 [2024-11-25 23:32:46.183579] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:13.961 [2024-11-25 23:32:46.183587] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:13.961 [2024-11-25 23:32:46.183601] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:13.961 [2024-11-25 23:32:46.183610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:13.961 [2024-11-25 23:32:46.183617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:13.961 [2024-11-25 23:32:46.183623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:13.961 [2024-11-25 23:32:46.183629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:13.962 [2024-11-25 23:32:46.183636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.962 [2024-11-25 23:32:46.183644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:13.962 [2024-11-25 23:32:46.183653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:31:13.962 [2024-11-25 23:32:46.183661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.962 [2024-11-25 23:32:46.197363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.962 [2024-11-25 23:32:46.197402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:13.962 [2024-11-25 23:32:46.197414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.662 ms 00:31:13.962 [2024-11-25 23:32:46.197422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.962 [2024-11-25 23:32:46.197817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.962 [2024-11-25 23:32:46.197835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:13.962 [2024-11-25 23:32:46.197844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:31:13.962 [2024-11-25 23:32:46.197852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.962 [2024-11-25 23:32:46.234294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.962 [2024-11-25 23:32:46.234475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:13.962 [2024-11-25 23:32:46.234496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.962 [2024-11-25 23:32:46.234506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.962 [2024-11-25 23:32:46.234575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.962 [2024-11-25 23:32:46.234592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:13.962 [2024-11-25 23:32:46.234602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.962 [2024-11-25 23:32:46.234611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.962 [2024-11-25 23:32:46.234676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.962 [2024-11-25 23:32:46.234686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:13.962 [2024-11-25 23:32:46.234694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.962 [2024-11-25 23:32:46.234703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.962 [2024-11-25 23:32:46.234719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.962 [2024-11-25 23:32:46.234727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:13.962 [2024-11-25 23:32:46.234739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:31:13.962 [2024-11-25 23:32:46.234746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.962 [2024-11-25 23:32:46.319010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.962 [2024-11-25 23:32:46.319093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:13.962 [2024-11-25 23:32:46.319109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.962 [2024-11-25 23:32:46.319118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.224 [2024-11-25 23:32:46.387930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.224 [2024-11-25 23:32:46.387985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:14.224 [2024-11-25 23:32:46.388005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.224 [2024-11-25 23:32:46.388014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.224 [2024-11-25 23:32:46.388125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.224 [2024-11-25 23:32:46.388138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:14.224 [2024-11-25 23:32:46.388147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.224 [2024-11-25 23:32:46.388156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.224 [2024-11-25 23:32:46.388194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.224 [2024-11-25 23:32:46.388204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:14.224 [2024-11-25 23:32:46.388212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.224 [2024-11-25 23:32:46.388224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.224 [2024-11-25 23:32:46.388325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.224 [2024-11-25 23:32:46.388336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:14.224 [2024-11-25 23:32:46.388345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.224 [2024-11-25 23:32:46.388352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.224 [2024-11-25 23:32:46.388382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.224 [2024-11-25 23:32:46.388392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:14.224 [2024-11-25 23:32:46.388400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.224 [2024-11-25 23:32:46.388408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.224 [2024-11-25 23:32:46.388453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.224 [2024-11-25 23:32:46.388464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:14.224 [2024-11-25 23:32:46.388472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.224 [2024-11-25 23:32:46.388481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.224 [2024-11-25 23:32:46.388527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.224 [2024-11-25 23:32:46.388538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:14.224 [2024-11-25 23:32:46.388547] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.224 [2024-11-25 23:32:46.388555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.224 [2024-11-25 23:32:46.388693] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 369.832 ms, result 0 00:31:14.798 00:31:14.798 00:31:14.798 23:32:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:17.348 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:31:17.348 23:32:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:31:17.348 23:32:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:31:17.348 23:32:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:17.348 23:32:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:17.348 23:32:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:17.348 23:32:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:17.348 23:32:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:17.348 Process with pid 81271 is not found 00:31:17.348 23:32:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81271 00:31:17.348 23:32:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81271 ']' 00:31:17.348 23:32:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81271 00:31:17.348 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81271) - No such process 00:31:17.348 23:32:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81271 is not found' 00:31:17.348 23:32:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:31:17.610 23:32:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:31:17.610 23:32:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:17.610 Remove shared memory files 00:31:17.610 23:32:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:17.610 23:32:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:17.610 23:32:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:31:17.610 23:32:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:17.610 23:32:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:17.610 ************************************ 00:31:17.610 END TEST ftl_dirty_shutdown 00:31:17.610 ************************************ 00:31:17.610 00:31:17.610 real 4m8.154s 00:31:17.610 user 4m22.865s 00:31:17.610 sys 0m23.682s 00:31:17.610 23:32:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:17.610 23:32:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:17.872 23:32:50 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:17.872 23:32:50 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:17.872 23:32:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:17.872 
23:32:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:17.872 ************************************ 00:31:17.872 START TEST ftl_upgrade_shutdown 00:31:17.872 ************************************ 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:17.872 * Looking for test storage... 00:31:17.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:17.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.872 --rc genhtml_branch_coverage=1 00:31:17.872 --rc genhtml_function_coverage=1 00:31:17.872 --rc genhtml_legend=1 00:31:17.872 --rc geninfo_all_blocks=1 00:31:17.872 --rc geninfo_unexecuted_blocks=1 00:31:17.872 00:31:17.872 ' 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:17.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.872 --rc genhtml_branch_coverage=1 00:31:17.872 --rc genhtml_function_coverage=1 00:31:17.872 --rc genhtml_legend=1 00:31:17.872 --rc geninfo_all_blocks=1 00:31:17.872 --rc geninfo_unexecuted_blocks=1 00:31:17.872 00:31:17.872 ' 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:17.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.872 --rc genhtml_branch_coverage=1 00:31:17.872 --rc genhtml_function_coverage=1 00:31:17.872 --rc genhtml_legend=1 00:31:17.872 --rc geninfo_all_blocks=1 00:31:17.872 --rc geninfo_unexecuted_blocks=1 00:31:17.872 00:31:17.872 ' 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:17.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.872 --rc genhtml_branch_coverage=1 00:31:17.872 --rc genhtml_function_coverage=1 00:31:17.872 --rc genhtml_legend=1 00:31:17.872 --rc geninfo_all_blocks=1 00:31:17.872 --rc geninfo_unexecuted_blocks=1 00:31:17.872 00:31:17.872 ' 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:31:17.872 23:32:50 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83926 00:31:17.872 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:17.873 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:31:17.873 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83926 00:31:17.873 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83926 ']' 00:31:17.873 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.873 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.873 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.873 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.873 23:32:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:18.133 [2024-11-25 23:32:50.291251] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
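The target bring-up traced above reduces to launch-and-poll. A minimal sketch of the sequence, with paths taken from the trace; the polling loop stands in for waitforlisten and is not its exact body:

    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$spdk_tgt_bin" --cpumask='[0]' &      # pin the FTL target to core 0
    spdk_tgt_pid=$!                        # 83926 in this run
    # Poll the default RPC socket until the target is ready to serve:
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done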
00:31:18.133 [2024-11-25 23:32:50.291638] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83926 ] 00:31:18.133 [2024-11-25 23:32:50.465304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.396 [2024-11-25 23:32:50.589882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:18.969 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:31:19.231 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:31:19.231 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:19.231 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:31:19.231 23:32:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:31:19.231 23:32:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:19.231 23:32:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:19.231 23:32:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:31:19.231 23:32:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:31:19.493 23:32:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:19.493 { 00:31:19.493 "name": "basen1", 00:31:19.493 "aliases": [ 00:31:19.493 "0a4c0d31-5b54-4694-8989-337fe59a7d29" 00:31:19.493 ], 00:31:19.493 "product_name": "NVMe disk", 00:31:19.493 "block_size": 4096, 00:31:19.493 "num_blocks": 1310720, 00:31:19.493 "uuid": "0a4c0d31-5b54-4694-8989-337fe59a7d29", 00:31:19.493 "numa_id": -1, 00:31:19.493 "assigned_rate_limits": { 00:31:19.493 "rw_ios_per_sec": 0, 00:31:19.493 "rw_mbytes_per_sec": 0, 00:31:19.493 "r_mbytes_per_sec": 0, 00:31:19.493 "w_mbytes_per_sec": 0 00:31:19.493 }, 00:31:19.493 "claimed": true, 00:31:19.493 "claim_type": "read_many_write_one", 00:31:19.493 "zoned": false, 00:31:19.493 "supported_io_types": { 00:31:19.493 "read": true, 00:31:19.493 "write": true, 00:31:19.493 "unmap": true, 00:31:19.493 "flush": true, 00:31:19.493 "reset": true, 00:31:19.493 "nvme_admin": true, 00:31:19.493 "nvme_io": true, 00:31:19.493 "nvme_io_md": false, 00:31:19.493 "write_zeroes": true, 00:31:19.493 "zcopy": false, 00:31:19.493 "get_zone_info": false, 00:31:19.493 "zone_management": false, 00:31:19.493 "zone_append": false, 00:31:19.493 "compare": true, 00:31:19.493 "compare_and_write": false, 00:31:19.493 "abort": true, 00:31:19.493 "seek_hole": false, 00:31:19.493 "seek_data": false, 00:31:19.493 "copy": true, 00:31:19.493 "nvme_iov_md": false 00:31:19.493 }, 00:31:19.493 "driver_specific": { 00:31:19.493 "nvme": [ 00:31:19.493 { 00:31:19.493 "pci_address": "0000:00:11.0", 00:31:19.493 "trid": { 00:31:19.493 "trtype": "PCIe", 00:31:19.493 "traddr": "0000:00:11.0" 00:31:19.493 }, 00:31:19.493 "ctrlr_data": { 00:31:19.493 "cntlid": 0, 00:31:19.493 "vendor_id": "0x1b36", 00:31:19.493 "model_number": "QEMU NVMe Ctrl", 00:31:19.493 "serial_number": "12341", 00:31:19.493 "firmware_revision": "8.0.0", 00:31:19.493 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:19.493 "oacs": { 00:31:19.493 "security": 0, 00:31:19.493 "format": 1, 00:31:19.493 "firmware": 0, 00:31:19.493 "ns_manage": 1 00:31:19.493 }, 00:31:19.493 "multi_ctrlr": false, 00:31:19.493 "ana_reporting": false 00:31:19.493 }, 00:31:19.493 "vs": { 00:31:19.493 "nvme_version": "1.4" 00:31:19.493 }, 00:31:19.493 "ns_data": { 00:31:19.493 "id": 1, 00:31:19.493 "can_share": false 00:31:19.493 } 00:31:19.493 } 00:31:19.493 ], 00:31:19.493 "mp_policy": "active_passive" 00:31:19.493 } 00:31:19.493 } 00:31:19.493 ]' 00:31:19.493 23:32:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:19.493 23:32:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:19.493 23:32:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:19.755 23:32:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:19.755 23:32:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:19.755 23:32:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:31:19.755 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:19.755 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:31:19.755 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:19.755 23:32:51 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:19.755 23:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:19.755 23:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=75b31cc0-1161-464c-ae22-eebd6b805c1f 00:31:19.755 23:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:19.755 23:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 75b31cc0-1161-464c-ae22-eebd6b805c1f 00:31:20.016 23:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:31:20.277 23:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=7382a443-48f9-42a5-a778-e002feb26095 00:31:20.277 23:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 7382a443-48f9-42a5-a778-e002feb26095 00:31:20.539 23:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=110066f9-d9eb-45dd-a0e4-03725d7829fe 00:31:20.539 23:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 110066f9-d9eb-45dd-a0e4-03725d7829fe ]] 00:31:20.539 23:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 110066f9-d9eb-45dd-a0e4-03725d7829fe 5120 00:31:20.539 23:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:31:20.539 23:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:20.539 23:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=110066f9-d9eb-45dd-a0e4-03725d7829fe 00:31:20.539 23:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:31:20.539 23:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 110066f9-d9eb-45dd-a0e4-03725d7829fe 00:31:20.539 23:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=110066f9-d9eb-45dd-a0e4-03725d7829fe 00:31:20.539 23:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:20.539 23:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:20.539 23:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:20.539 23:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 110066f9-d9eb-45dd-a0e4-03725d7829fe 00:31:20.801 23:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:20.801 { 00:31:20.801 "name": "110066f9-d9eb-45dd-a0e4-03725d7829fe", 00:31:20.801 "aliases": [ 00:31:20.801 "lvs/basen1p0" 00:31:20.801 ], 00:31:20.801 "product_name": "Logical Volume", 00:31:20.801 "block_size": 4096, 00:31:20.801 "num_blocks": 5242880, 00:31:20.801 "uuid": "110066f9-d9eb-45dd-a0e4-03725d7829fe", 00:31:20.801 "assigned_rate_limits": { 00:31:20.801 "rw_ios_per_sec": 0, 00:31:20.801 "rw_mbytes_per_sec": 0, 00:31:20.801 "r_mbytes_per_sec": 0, 00:31:20.801 "w_mbytes_per_sec": 0 00:31:20.801 }, 00:31:20.801 "claimed": false, 00:31:20.801 "zoned": false, 00:31:20.801 "supported_io_types": { 00:31:20.801 "read": true, 00:31:20.801 "write": true, 00:31:20.801 "unmap": true, 00:31:20.801 "flush": false, 00:31:20.801 "reset": true, 00:31:20.801 "nvme_admin": false, 00:31:20.801 "nvme_io": false, 00:31:20.801 "nvme_io_md": false, 00:31:20.801 "write_zeroes": 
true, 00:31:20.801 "zcopy": false, 00:31:20.801 "get_zone_info": false, 00:31:20.801 "zone_management": false, 00:31:20.801 "zone_append": false, 00:31:20.801 "compare": false, 00:31:20.801 "compare_and_write": false, 00:31:20.801 "abort": false, 00:31:20.801 "seek_hole": true, 00:31:20.801 "seek_data": true, 00:31:20.801 "copy": false, 00:31:20.801 "nvme_iov_md": false 00:31:20.801 }, 00:31:20.801 "driver_specific": { 00:31:20.801 "lvol": { 00:31:20.801 "lvol_store_uuid": "7382a443-48f9-42a5-a778-e002feb26095", 00:31:20.801 "base_bdev": "basen1", 00:31:20.801 "thin_provision": true, 00:31:20.801 "num_allocated_clusters": 0, 00:31:20.801 "snapshot": false, 00:31:20.801 "clone": false, 00:31:20.801 "esnap_clone": false 00:31:20.801 } 00:31:20.801 } 00:31:20.801 } 00:31:20.801 ]' 00:31:20.801 23:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:20.801 23:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:20.801 23:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:20.801 23:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:31:20.801 23:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:31:20.801 23:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:31:20.801 23:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:31:20.801 23:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:20.801 23:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:31:21.062 23:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:31:21.062 23:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:31:21.062 23:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:31:21.323 23:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:31:21.323 23:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:31:21.323 23:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 110066f9-d9eb-45dd-a0e4-03725d7829fe -c cachen1p0 --l2p_dram_limit 2 00:31:21.586 [2024-11-25 23:32:53.726320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:21.586 [2024-11-25 23:32:53.726383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:21.586 [2024-11-25 23:32:53.726402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:21.586 [2024-11-25 23:32:53.726411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:21.586 [2024-11-25 23:32:53.726480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:21.586 [2024-11-25 23:32:53.726491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:21.586 [2024-11-25 23:32:53.726502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:31:21.586 [2024-11-25 23:32:53.726510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:21.586 [2024-11-25 23:32:53.726532] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:21.586 [2024-11-25 
23:32:53.727308] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:21.586 [2024-11-25 23:32:53.727337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:21.586 [2024-11-25 23:32:53.727346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:21.586 [2024-11-25 23:32:53.727358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.806 ms 00:31:21.586 [2024-11-25 23:32:53.727366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:21.586 [2024-11-25 23:32:53.727410] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID a990302b-059b-4be7-a465-efdf8b4c486d 00:31:21.586 [2024-11-25 23:32:53.729147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:21.586 [2024-11-25 23:32:53.729417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:31:21.586 [2024-11-25 23:32:53.729442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:31:21.586 [2024-11-25 23:32:53.729453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:21.586 [2024-11-25 23:32:53.737972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:21.586 [2024-11-25 23:32:53.738022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:21.586 [2024-11-25 23:32:53.738033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.372 ms 00:31:21.586 [2024-11-25 23:32:53.738043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:21.586 [2024-11-25 23:32:53.738110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:21.586 [2024-11-25 23:32:53.738122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:21.586 [2024-11-25 23:32:53.738131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:31:21.586 [2024-11-25 23:32:53.738144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:21.586 [2024-11-25 23:32:53.738199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:21.586 [2024-11-25 23:32:53.738214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:21.586 [2024-11-25 23:32:53.738225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:21.586 [2024-11-25 23:32:53.738238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:21.586 [2024-11-25 23:32:53.738261] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:21.586 [2024-11-25 23:32:53.742591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:21.586 [2024-11-25 23:32:53.742628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:21.586 [2024-11-25 23:32:53.742642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.334 ms 00:31:21.586 [2024-11-25 23:32:53.742650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:21.586 [2024-11-25 23:32:53.742683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:21.586 [2024-11-25 23:32:53.742691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:21.586 [2024-11-25 23:32:53.742702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:21.586 [2024-11-25 23:32:53.742710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:21.586 [2024-11-25 23:32:53.742763] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:31:21.586 [2024-11-25 23:32:53.742907] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:21.586 [2024-11-25 23:32:53.742927] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:21.586 [2024-11-25 23:32:53.742938] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:21.586 [2024-11-25 23:32:53.742952] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:21.586 [2024-11-25 23:32:53.742962] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:21.586 [2024-11-25 23:32:53.742973] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:21.586 [2024-11-25 23:32:53.742980] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:21.586 [2024-11-25 23:32:53.742992] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:21.586 [2024-11-25 23:32:53.743001] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:21.586 [2024-11-25 23:32:53.743011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:21.586 [2024-11-25 23:32:53.743019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:21.586 [2024-11-25 23:32:53.743031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.249 ms 00:31:21.586 [2024-11-25 23:32:53.743039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:21.586 [2024-11-25 23:32:53.743142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:21.586 [2024-11-25 23:32:53.743161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:21.586 [2024-11-25 23:32:53.743172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:31:21.586 [2024-11-25 23:32:53.743180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:21.586 [2024-11-25 23:32:53.743288] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:21.586 [2024-11-25 23:32:53.743301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:21.586 [2024-11-25 23:32:53.743312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:21.586 [2024-11-25 23:32:53.743320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:21.586 [2024-11-25 23:32:53.743332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:21.586 [2024-11-25 23:32:53.743339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:21.586 [2024-11-25 23:32:53.743350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:21.586 [2024-11-25 23:32:53.743357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:21.586 [2024-11-25 23:32:53.743366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:21.586 [2024-11-25 23:32:53.743373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:21.586 [2024-11-25 23:32:53.743384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:21.586 [2024-11-25 23:32:53.743391] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:31:21.586 [2024-11-25 23:32:53.743399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:21.586 [2024-11-25 23:32:53.743406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:21.586 [2024-11-25 23:32:53.743415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:21.586 [2024-11-25 23:32:53.743422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:21.587 [2024-11-25 23:32:53.743436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:21.587 [2024-11-25 23:32:53.743446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:21.587 [2024-11-25 23:32:53.743455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:21.587 [2024-11-25 23:32:53.743462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:21.587 [2024-11-25 23:32:53.743471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:21.587 [2024-11-25 23:32:53.743478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:21.587 [2024-11-25 23:32:53.743488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:21.587 [2024-11-25 23:32:53.743495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:21.587 [2024-11-25 23:32:53.743504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:21.587 [2024-11-25 23:32:53.743510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:21.587 [2024-11-25 23:32:53.743519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:21.587 [2024-11-25 23:32:53.743526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:21.587 [2024-11-25 23:32:53.743534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:21.587 [2024-11-25 23:32:53.743541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:21.587 [2024-11-25 23:32:53.743550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:21.587 [2024-11-25 23:32:53.743558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:21.587 [2024-11-25 23:32:53.743569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:21.587 [2024-11-25 23:32:53.743575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:21.587 [2024-11-25 23:32:53.743583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:21.587 [2024-11-25 23:32:53.743590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:21.587 [2024-11-25 23:32:53.743598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:21.587 [2024-11-25 23:32:53.743605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:21.587 [2024-11-25 23:32:53.743614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:21.587 [2024-11-25 23:32:53.743622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:21.587 [2024-11-25 23:32:53.743630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:21.587 [2024-11-25 23:32:53.743637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:21.587 [2024-11-25 23:32:53.743645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:21.587 [2024-11-25 23:32:53.743652] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:31:21.587 [2024-11-25 23:32:53.743663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:21.587 [2024-11-25 23:32:53.743671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:21.587 [2024-11-25 23:32:53.743681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:21.587 [2024-11-25 23:32:53.743690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:21.587 [2024-11-25 23:32:53.743700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:21.587 [2024-11-25 23:32:53.743708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:21.587 [2024-11-25 23:32:53.743717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:21.587 [2024-11-25 23:32:53.743723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:21.587 [2024-11-25 23:32:53.743732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:21.587 [2024-11-25 23:32:53.743744] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:21.587 [2024-11-25 23:32:53.743758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:21.587 [2024-11-25 23:32:53.743767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:21.587 [2024-11-25 23:32:53.743777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:21.587 [2024-11-25 23:32:53.743786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:21.587 [2024-11-25 23:32:53.743796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:21.587 [2024-11-25 23:32:53.743803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:21.587 [2024-11-25 23:32:53.743813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:21.587 [2024-11-25 23:32:53.743820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:21.587 [2024-11-25 23:32:53.743829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:21.587 [2024-11-25 23:32:53.743836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:21.587 [2024-11-25 23:32:53.743848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:21.587 [2024-11-25 23:32:53.743858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:21.587 [2024-11-25 23:32:53.743868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:21.587 [2024-11-25 23:32:53.743875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:21.587 [2024-11-25 23:32:53.743884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:21.587 [2024-11-25 23:32:53.743892] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:21.587 [2024-11-25 23:32:53.743903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:21.587 [2024-11-25 23:32:53.743912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:21.587 [2024-11-25 23:32:53.743922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:21.587 [2024-11-25 23:32:53.743930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:21.587 [2024-11-25 23:32:53.743940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:21.587 [2024-11-25 23:32:53.743947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:21.587 [2024-11-25 23:32:53.743957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:21.587 [2024-11-25 23:32:53.743964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.732 ms 00:31:21.587 [2024-11-25 23:32:53.743976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:21.587 [2024-11-25 23:32:53.744016] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
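The layout dump above is internally consistent, which is worth checking once from its own numbers: the base data region (data_btm) spans 18432 MiB of 4096-byte blocks; the L2P holds 3,774,873 entries at an address size of 4 bytes, i.e. about 14.4 MiB, matching the 14.50 MiB l2p region; and 3,774,873 entries x 4 KiB comes to roughly 14.4 GiB of mapped user space, about 80% of the data region, consistent with FTL holding back the remainder for overprovisioning and relocation.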
00:31:21.587 [2024-11-25 23:32:53.744030] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:25.795 [2024-11-25 23:32:57.523094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.795 [2024-11-25 23:32:57.523197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:25.795 [2024-11-25 23:32:57.523219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3779.059 ms 00:31:25.795 [2024-11-25 23:32:57.523232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.795 [2024-11-25 23:32:57.560311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.795 [2024-11-25 23:32:57.560393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:25.795 [2024-11-25 23:32:57.560412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.807 ms 00:31:25.795 [2024-11-25 23:32:57.560425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.795 [2024-11-25 23:32:57.560520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.795 [2024-11-25 23:32:57.560534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:25.795 [2024-11-25 23:32:57.560545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:31:25.795 [2024-11-25 23:32:57.560564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.795 [2024-11-25 23:32:57.600889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.795 [2024-11-25 23:32:57.600992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:25.795 [2024-11-25 23:32:57.601007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.272 ms 00:31:25.795 [2024-11-25 23:32:57.601019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.795 [2024-11-25 23:32:57.601087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.795 [2024-11-25 23:32:57.601100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:25.795 [2024-11-25 23:32:57.601110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:25.796 [2024-11-25 23:32:57.601121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.601852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.796 [2024-11-25 23:32:57.601914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:25.796 [2024-11-25 23:32:57.601937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.668 ms 00:31:25.796 [2024-11-25 23:32:57.601949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.602004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.796 [2024-11-25 23:32:57.602017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:25.796 [2024-11-25 23:32:57.602030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:31:25.796 [2024-11-25 23:32:57.602045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.622682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.796 [2024-11-25 23:32:57.622739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:25.796 [2024-11-25 23:32:57.622752] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.599 ms 00:31:25.796 [2024-11-25 23:32:57.622762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.637545] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:25.796 [2024-11-25 23:32:57.639441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.796 [2024-11-25 23:32:57.639488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:25.796 [2024-11-25 23:32:57.639504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.581 ms 00:31:25.796 [2024-11-25 23:32:57.639513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.682452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.796 [2024-11-25 23:32:57.682692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:31:25.796 [2024-11-25 23:32:57.682726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.896 ms 00:31:25.796 [2024-11-25 23:32:57.682737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.683134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.796 [2024-11-25 23:32:57.683169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:25.796 [2024-11-25 23:32:57.683187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.106 ms 00:31:25.796 [2024-11-25 23:32:57.683196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.708794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.796 [2024-11-25 23:32:57.708850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:31:25.796 [2024-11-25 23:32:57.708869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.532 ms 00:31:25.796 [2024-11-25 23:32:57.708879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.734600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.796 [2024-11-25 23:32:57.734649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:31:25.796 [2024-11-25 23:32:57.734665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.637 ms 00:31:25.796 [2024-11-25 23:32:57.734673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.735429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.796 [2024-11-25 23:32:57.735459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:25.796 [2024-11-25 23:32:57.735476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.603 ms 00:31:25.796 [2024-11-25 23:32:57.735489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.826303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.796 [2024-11-25 23:32:57.826359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:31:25.796 [2024-11-25 23:32:57.826381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 90.759 ms 00:31:25.796 [2024-11-25 23:32:57.826391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.855968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:25.796 [2024-11-25 23:32:57.856021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:31:25.796 [2024-11-25 23:32:57.856039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.473 ms 00:31:25.796 [2024-11-25 23:32:57.856048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.882616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.796 [2024-11-25 23:32:57.882679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:31:25.796 [2024-11-25 23:32:57.882696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.494 ms 00:31:25.796 [2024-11-25 23:32:57.882704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.908967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.796 [2024-11-25 23:32:57.909022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:25.796 [2024-11-25 23:32:57.909039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.203 ms 00:31:25.796 [2024-11-25 23:32:57.909046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.909124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.796 [2024-11-25 23:32:57.909135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:25.796 [2024-11-25 23:32:57.909152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:25.796 [2024-11-25 23:32:57.909161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.909276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:25.796 [2024-11-25 23:32:57.909292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:25.796 [2024-11-25 23:32:57.909305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:31:25.796 [2024-11-25 23:32:57.909313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:25.796 [2024-11-25 23:32:57.910791] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4183.888 ms, result 0 00:31:25.796 { 00:31:25.796 "name": "ftl", 00:31:25.796 "uuid": "a990302b-059b-4be7-a465-efdf8b4c486d" 00:31:25.796 } 00:31:25.796 23:32:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:31:25.796 [2024-11-25 23:32:58.137645] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.796 23:32:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:31:26.057 23:32:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:31:26.318 [2024-11-25 23:32:58.566132] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:26.318 23:32:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:31:26.579 [2024-11-25 23:32:58.782875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:26.580 23:32:58 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:26.841 Fill FTL, iteration 1 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84048 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84048 /var/tmp/spdk.tgt.sock 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84048 ']' 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:31:26.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:26.841 23:32:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:26.841 [2024-11-25 23:32:59.201191] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
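Each tcp_dd call in this test has the same shape: stand up a throwaway initiator target on core 1, attach the exported subsystem over NVMe/TCP so it surfaces as bdev ftln1, snapshot the bdev subsystem config to ini.json, kill the initiator, and hand the JSON straight to spdk_dd. Condensed from the commands the trace walks through next (sockets and paths as logged):

    rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # surfaces as ftln1
    {
        echo '{"subsystems": ['
        rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    spdk_dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0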
00:31:26.841 [2024-11-25 23:32:59.201455] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84048 ] 00:31:27.101 [2024-11-25 23:32:59.357588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.101 [2024-11-25 23:32:59.455006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.044 23:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:28.044 23:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:28.044 23:33:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:31:28.044 ftln1 00:31:28.044 23:33:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:31:28.044 23:33:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:31:28.306 23:33:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:31:28.306 23:33:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84048 00:31:28.306 23:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84048 ']' 00:31:28.306 23:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84048 00:31:28.306 23:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:28.306 23:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:28.306 23:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84048 00:31:28.306 killing process with pid 84048 00:31:28.306 23:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:28.306 23:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:28.306 23:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84048' 00:31:28.306 23:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84048 00:31:28.307 23:33:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84048 00:31:29.719 23:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:31:29.719 23:33:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:29.719 [2024-11-25 23:33:02.020196] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
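The transfer now starting moves 1024 blocks of 1,048,576 bytes at queue depth 2, i.e. exactly the 1 GiB per iteration implied by size=1073741824 at upgrade_shutdown.sh@28; the seek and skip offsets are counted in these same 1 MiB blocks, so seek=0 targets the first gigabyte of ftln1.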
00:31:29.719 [2024-11-25 23:33:02.020976] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84094 ] 00:31:29.979 [2024-11-25 23:33:02.180990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.979 [2024-11-25 23:33:02.275155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.360  [2024-11-25T23:33:04.664Z] Copying: 229/1024 [MB] (229 MBps) [2024-11-25T23:33:06.037Z] Copying: 491/1024 [MB] (262 MBps) [2024-11-25T23:33:06.973Z] Copying: 749/1024 [MB] (258 MBps) [2024-11-25T23:33:06.973Z] Copying: 1008/1024 [MB] (259 MBps) [2024-11-25T23:33:07.539Z] Copying: 1024/1024 [MB] (average 251 MBps) 00:31:35.170 00:31:35.170 Calculate MD5 checksum, iteration 1 00:31:35.170 23:33:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:31:35.170 23:33:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:31:35.170 23:33:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:35.170 23:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:35.170 23:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:35.170 23:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:35.170 23:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:35.170 23:33:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:35.170 [2024-11-25 23:33:07.329090] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
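The read-back pass now starting is the mirror image of the fill: the same region comes back over the same NVMe/TCP path into a scratch file, whose digest is recorded per iteration, e.g.:

    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d '   # stored as sums[i]

Stashing one digest per written gigabyte is what gives the shutdown and upgrade steps later in the test something to verify against.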
00:31:35.170 [2024-11-25 23:33:07.329371] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84148 ] 00:31:35.170 [2024-11-25 23:33:07.486160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.429 [2024-11-25 23:33:07.562522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.800  [2024-11-25T23:33:09.734Z] Copying: 654/1024 [MB] (654 MBps) [2024-11-25T23:33:09.994Z] Copying: 1024/1024 [MB] (average 653 MBps) 00:31:37.625 00:31:37.625 23:33:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:31:37.625 23:33:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:39.540 23:33:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:39.540 Fill FTL, iteration 2 00:31:39.540 23:33:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=480bc47511078da130eb04c4bd119a0b 00:31:39.540 23:33:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:39.540 23:33:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:39.540 23:33:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:31:39.540 23:33:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:39.540 23:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:39.540 23:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:39.540 23:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:39.540 23:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:39.540 23:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:39.540 [2024-11-25 23:33:11.902708] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
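With one full fill-and-checksum round complete and the second underway, the per-iteration structure implied by the upgrade_shutdown.sh line numbers in the trace (@29-@48) is, in outline, the following; this is a sketch reconstructed from the trace, not the script verbatim:

    seek=0; skip=0; iterations=2; sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        (( seek += 1024 ))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of=$testdir/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
        (( skip += 1024 ))
        sums[i]=$(md5sum $testdir/file | cut -f1 -d' ')
    done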
00:31:39.540 [2024-11-25 23:33:11.902827] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84205 ] 00:31:39.799 [2024-11-25 23:33:12.057991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.799 [2024-11-25 23:33:12.131596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:41.173  [2024-11-25T23:33:14.478Z] Copying: 262/1024 [MB] (262 MBps) [2024-11-25T23:33:15.853Z] Copying: 517/1024 [MB] (255 MBps) [2024-11-25T23:33:16.789Z] Copying: 743/1024 [MB] (226 MBps) [2024-11-25T23:33:16.789Z] Copying: 942/1024 [MB] (199 MBps) [2024-11-25T23:33:17.356Z] Copying: 1024/1024 [MB] (average 237 MBps) 00:31:44.987 00:31:44.987 Calculate MD5 checksum, iteration 2 00:31:44.987 23:33:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:31:44.987 23:33:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:31:44.987 23:33:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:44.987 23:33:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:44.987 23:33:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:44.987 23:33:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:44.987 23:33:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:44.987 23:33:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:45.245 [2024-11-25 23:33:17.368448] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
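The read-back now starting produces the second digest; once it lands in sums[], the data side is done and the script turns to the FTL property interface, traced below. The RPCs involved, including the jq filter used at upgrade_shutdown.sh@63 to count cache chunks that actually hold data:

    rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
    rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length'   # -> 3 in this run
    rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true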
00:31:45.245 [2024-11-25 23:33:17.368544] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84258 ] 00:31:45.245 [2024-11-25 23:33:17.516803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.245 [2024-11-25 23:33:17.595600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.144  [2024-11-25T23:33:19.771Z] Copying: 660/1024 [MB] (660 MBps) [2024-11-25T23:33:20.707Z] Copying: 1024/1024 [MB] (average 658 MBps) 00:31:48.338 00:31:48.338 23:33:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:31:48.338 23:33:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:50.254 23:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:50.254 23:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d62609845388fab3d93be09f8fe11cd3 00:31:50.254 23:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:50.254 23:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:50.254 23:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:50.514 [2024-11-25 23:33:22.656983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.514 [2024-11-25 23:33:22.657031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:50.514 [2024-11-25 23:33:22.657043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:50.514 [2024-11-25 23:33:22.657052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.514 [2024-11-25 23:33:22.657086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.514 [2024-11-25 23:33:22.657095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:50.514 [2024-11-25 23:33:22.657106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:50.514 [2024-11-25 23:33:22.657114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.514 [2024-11-25 23:33:22.657134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.514 [2024-11-25 23:33:22.657142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:50.514 [2024-11-25 23:33:22.657150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:50.514 [2024-11-25 23:33:22.657157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.514 [2024-11-25 23:33:22.657216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.223 ms, result 0 00:31:50.514 true 00:31:50.514 23:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:50.514 { 00:31:50.514 "name": "ftl", 00:31:50.514 "properties": [ 00:31:50.514 { 00:31:50.514 "name": "superblock_version", 00:31:50.514 "value": 5, 00:31:50.514 "read-only": true 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "name": "base_device", 00:31:50.514 "bands": [ 00:31:50.514 { 00:31:50.514 "id": 0, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 
00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 1, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 2, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 3, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 4, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 5, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 6, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 7, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 8, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 9, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 10, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 11, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 12, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 13, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 14, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 15, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 16, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 }, 00:31:50.514 { 00:31:50.514 "id": 17, 00:31:50.514 "state": "FREE", 00:31:50.514 "validity": 0.0 00:31:50.514 } 00:31:50.515 ], 00:31:50.515 "read-only": true 00:31:50.515 }, 00:31:50.515 { 00:31:50.515 "name": "cache_device", 00:31:50.515 "type": "bdev", 00:31:50.515 "chunks": [ 00:31:50.515 { 00:31:50.515 "id": 0, 00:31:50.515 "state": "INACTIVE", 00:31:50.515 "utilization": 0.0 00:31:50.515 }, 00:31:50.515 { 00:31:50.515 "id": 1, 00:31:50.515 "state": "CLOSED", 00:31:50.515 "utilization": 1.0 00:31:50.515 }, 00:31:50.515 { 00:31:50.515 "id": 2, 00:31:50.515 "state": "CLOSED", 00:31:50.515 "utilization": 1.0 00:31:50.515 }, 00:31:50.515 { 00:31:50.515 "id": 3, 00:31:50.515 "state": "OPEN", 00:31:50.515 "utilization": 0.001953125 00:31:50.515 }, 00:31:50.515 { 00:31:50.515 "id": 4, 00:31:50.515 "state": "OPEN", 00:31:50.515 "utilization": 0.0 00:31:50.515 } 00:31:50.515 ], 00:31:50.515 "read-only": true 00:31:50.515 }, 00:31:50.515 { 00:31:50.515 "name": "verbose_mode", 00:31:50.515 "value": true, 00:31:50.515 "unit": "", 00:31:50.515 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:50.515 }, 00:31:50.515 { 00:31:50.515 "name": "prep_upgrade_on_shutdown", 00:31:50.515 "value": false, 00:31:50.515 "unit": "", 00:31:50.515 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:50.515 } 00:31:50.515 ] 00:31:50.515 } 00:31:50.515 23:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:50.775 [2024-11-25 23:33:22.965315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
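(For reference: the property round-trip being exercised above can be reproduced against a live target with the same two RPCs this test drives. The sketch below is illustrative and not part of the captured output; the rpc.py path is the one used by this run, and the jq filter mirrors the one the test itself applies a few lines later.)

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # flip the shutdown-upgrade flag, then read it back
    $RPC bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    $RPC bdev_ftl_get_properties -b ftl \
        | jq '.properties[] | select(.name == "prep_upgrade_on_shutdown") | .value'
    # prints: true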
00:31:50.775 [2024-11-25 23:33:22.965357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:50.775 [2024-11-25 23:33:22.965369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:50.775 [2024-11-25 23:33:22.965377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.775 [2024-11-25 23:33:22.965397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.775 [2024-11-25 23:33:22.965405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:50.775 [2024-11-25 23:33:22.965413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:50.775 [2024-11-25 23:33:22.965420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.775 [2024-11-25 23:33:22.965438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:50.775 [2024-11-25 23:33:22.965445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:50.775 [2024-11-25 23:33:22.965453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:50.775 [2024-11-25 23:33:22.965460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:50.775 [2024-11-25 23:33:22.965514] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.186 ms, result 0 00:31:50.775 true 00:31:50.775 23:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:50.775 23:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:50.775 23:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:51.037 23:33:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:51.037 23:33:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:51.037 23:33:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:51.299 [2024-11-25 23:33:23.478046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:51.299 [2024-11-25 23:33:23.478121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:51.299 [2024-11-25 23:33:23.478136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:51.299 [2024-11-25 23:33:23.478145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:51.299 [2024-11-25 23:33:23.478170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:51.299 [2024-11-25 23:33:23.478180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:51.299 [2024-11-25 23:33:23.478189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:51.299 [2024-11-25 23:33:23.478197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:51.299 [2024-11-25 23:33:23.478218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:51.299 [2024-11-25 23:33:23.478226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:51.299 [2024-11-25 23:33:23.478235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:51.299 [2024-11-25 23:33:23.478242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:51.299 [2024-11-25 23:33:23.478308] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.249 ms, result 0 00:31:51.299 true 00:31:51.299 23:33:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:51.560 { 00:31:51.560 "name": "ftl", 00:31:51.560 "properties": [ 00:31:51.560 { 00:31:51.560 "name": "superblock_version", 00:31:51.560 "value": 5, 00:31:51.560 "read-only": true 00:31:51.560 }, 00:31:51.560 { 00:31:51.560 "name": "base_device", 00:31:51.560 "bands": [ 00:31:51.560 { 00:31:51.560 "id": 0, 00:31:51.560 "state": "FREE", 00:31:51.560 "validity": 0.0 00:31:51.560 }, 00:31:51.560 { 00:31:51.560 "id": 1, 00:31:51.560 "state": "FREE", 00:31:51.560 "validity": 0.0 00:31:51.560 }, 00:31:51.560 { 00:31:51.560 "id": 2, 00:31:51.560 "state": "FREE", 00:31:51.560 "validity": 0.0 00:31:51.560 }, 00:31:51.560 { 00:31:51.560 "id": 3, 00:31:51.560 "state": "FREE", 00:31:51.560 "validity": 0.0 00:31:51.560 }, 00:31:51.560 { 00:31:51.560 "id": 4, 00:31:51.560 "state": "FREE", 00:31:51.560 "validity": 0.0 00:31:51.560 }, 00:31:51.560 { 00:31:51.560 "id": 5, 00:31:51.560 "state": "FREE", 00:31:51.560 "validity": 0.0 00:31:51.560 }, 00:31:51.560 { 00:31:51.560 "id": 6, 00:31:51.560 "state": "FREE", 00:31:51.560 "validity": 0.0 00:31:51.560 }, 00:31:51.560 { 00:31:51.560 "id": 7, 00:31:51.560 "state": "FREE", 00:31:51.560 "validity": 0.0 00:31:51.560 }, 00:31:51.560 { 00:31:51.560 "id": 8, 00:31:51.560 "state": "FREE", 00:31:51.560 "validity": 0.0 00:31:51.560 }, 00:31:51.560 { 00:31:51.560 "id": 9, 00:31:51.560 "state": "FREE", 00:31:51.560 "validity": 0.0 00:31:51.560 }, 00:31:51.560 { 00:31:51.560 "id": 10, 00:31:51.560 "state": "FREE", 00:31:51.560 "validity": 0.0 00:31:51.560 }, 00:31:51.560 { 00:31:51.560 "id": 11, 00:31:51.560 "state": "FREE", 00:31:51.561 "validity": 0.0 00:31:51.561 }, 00:31:51.561 { 00:31:51.561 "id": 12, 00:31:51.561 "state": "FREE", 00:31:51.561 "validity": 0.0 00:31:51.561 }, 00:31:51.561 { 00:31:51.561 "id": 13, 00:31:51.561 "state": "FREE", 00:31:51.561 "validity": 0.0 00:31:51.561 }, 00:31:51.561 { 00:31:51.561 "id": 14, 00:31:51.561 "state": "FREE", 00:31:51.561 "validity": 0.0 00:31:51.561 }, 00:31:51.561 { 00:31:51.561 "id": 15, 00:31:51.561 "state": "FREE", 00:31:51.561 "validity": 0.0 00:31:51.561 }, 00:31:51.561 { 00:31:51.561 "id": 16, 00:31:51.561 "state": "FREE", 00:31:51.561 "validity": 0.0 00:31:51.561 }, 00:31:51.561 { 00:31:51.561 "id": 17, 00:31:51.561 "state": "FREE", 00:31:51.561 "validity": 0.0 00:31:51.561 } 00:31:51.561 ], 00:31:51.561 "read-only": true 00:31:51.561 }, 00:31:51.561 { 00:31:51.561 "name": "cache_device", 00:31:51.561 "type": "bdev", 00:31:51.561 "chunks": [ 00:31:51.561 { 00:31:51.561 "id": 0, 00:31:51.561 "state": "INACTIVE", 00:31:51.561 "utilization": 0.0 00:31:51.561 }, 00:31:51.561 { 00:31:51.561 "id": 1, 00:31:51.561 "state": "CLOSED", 00:31:51.561 "utilization": 1.0 00:31:51.561 }, 00:31:51.561 { 00:31:51.561 "id": 2, 00:31:51.561 "state": "CLOSED", 00:31:51.561 "utilization": 1.0 00:31:51.561 }, 00:31:51.561 { 00:31:51.561 "id": 3, 00:31:51.561 "state": "OPEN", 00:31:51.561 "utilization": 0.001953125 00:31:51.561 }, 00:31:51.561 { 00:31:51.561 "id": 4, 00:31:51.561 "state": "OPEN", 00:31:51.561 "utilization": 0.0 00:31:51.561 } 00:31:51.561 ], 00:31:51.561 "read-only": true 00:31:51.561 }, 00:31:51.561 { 00:31:51.561 "name": "verbose_mode", 
00:31:51.561 "value": true, 00:31:51.561 "unit": "", 00:31:51.561 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:51.561 }, 00:31:51.561 { 00:31:51.561 "name": "prep_upgrade_on_shutdown", 00:31:51.561 "value": true, 00:31:51.561 "unit": "", 00:31:51.561 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:51.561 } 00:31:51.561 ] 00:31:51.561 } 00:31:51.561 23:33:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:51.561 23:33:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83926 ]] 00:31:51.561 23:33:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83926 00:31:51.561 23:33:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83926 ']' 00:31:51.561 23:33:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83926 00:31:51.561 23:33:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:51.561 23:33:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:51.561 23:33:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83926 00:31:51.561 killing process with pid 83926 00:31:51.561 23:33:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:51.561 23:33:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:51.561 23:33:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83926' 00:31:51.561 23:33:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83926 00:31:51.561 23:33:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83926 00:31:52.132 [2024-11-25 23:33:24.447650] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:52.132 [2024-11-25 23:33:24.457348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:52.132 [2024-11-25 23:33:24.457382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:52.132 [2024-11-25 23:33:24.457393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:52.132 [2024-11-25 23:33:24.457399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:52.132 [2024-11-25 23:33:24.457416] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:52.132 [2024-11-25 23:33:24.459438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:52.132 [2024-11-25 23:33:24.459463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:52.132 [2024-11-25 23:33:24.459471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.012 ms 00:31:52.132 [2024-11-25 23:33:24.459478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.200748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.128 [2024-11-25 23:33:33.200798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:02.128 [2024-11-25 23:33:33.200814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8741.216 ms 00:32:02.128 [2024-11-25 23:33:33.200821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.201778] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:32:02.128 [2024-11-25 23:33:33.201796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:02.128 [2024-11-25 23:33:33.201803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.945 ms 00:32:02.128 [2024-11-25 23:33:33.201809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.202666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.128 [2024-11-25 23:33:33.202687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:02.128 [2024-11-25 23:33:33.202696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.838 ms 00:32:02.128 [2024-11-25 23:33:33.202705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.210221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.128 [2024-11-25 23:33:33.210248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:02.128 [2024-11-25 23:33:33.210257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.479 ms 00:32:02.128 [2024-11-25 23:33:33.210264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.215090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.128 [2024-11-25 23:33:33.215220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:02.128 [2024-11-25 23:33:33.215233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.801 ms 00:32:02.128 [2024-11-25 23:33:33.215239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.215292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.128 [2024-11-25 23:33:33.215299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:02.128 [2024-11-25 23:33:33.215310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:32:02.128 [2024-11-25 23:33:33.215316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.222660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.128 [2024-11-25 23:33:33.222756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:02.128 [2024-11-25 23:33:33.222767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.333 ms 00:32:02.128 [2024-11-25 23:33:33.222772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.230090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.128 [2024-11-25 23:33:33.230176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:02.128 [2024-11-25 23:33:33.230224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.295 ms 00:32:02.128 [2024-11-25 23:33:33.230242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.237288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.128 [2024-11-25 23:33:33.237377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:02.128 [2024-11-25 23:33:33.237421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.016 ms 00:32:02.128 [2024-11-25 23:33:33.237438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.244583] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.128 [2024-11-25 23:33:33.244670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:02.128 [2024-11-25 23:33:33.244713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.095 ms 00:32:02.128 [2024-11-25 23:33:33.244729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.244758] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:02.128 [2024-11-25 23:33:33.244786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:02.128 [2024-11-25 23:33:33.244810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:02.128 [2024-11-25 23:33:33.244832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:02.128 [2024-11-25 23:33:33.244882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:02.128 [2024-11-25 23:33:33.244904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:02.128 [2024-11-25 23:33:33.244934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:02.128 [2024-11-25 23:33:33.244956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:02.128 [2024-11-25 23:33:33.244978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:02.128 [2024-11-25 23:33:33.245017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:02.128 [2024-11-25 23:33:33.245040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:02.128 [2024-11-25 23:33:33.245076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:02.128 [2024-11-25 23:33:33.245098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:02.128 [2024-11-25 23:33:33.245120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:02.128 [2024-11-25 23:33:33.245161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:02.128 [2024-11-25 23:33:33.245187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:02.128 [2024-11-25 23:33:33.245208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:02.128 [2024-11-25 23:33:33.245230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:02.128 [2024-11-25 23:33:33.245251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:02.128 [2024-11-25 23:33:33.245274] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:02.128 [2024-11-25 23:33:33.245318] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a990302b-059b-4be7-a465-efdf8b4c486d 00:32:02.128 [2024-11-25 23:33:33.245343] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:02.128 [2024-11-25 23:33:33.245358] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:32:02.128 [2024-11-25 23:33:33.245372] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:32:02.128 [2024-11-25 23:33:33.245386] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:32:02.128 [2024-11-25 23:33:33.245404] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:02.128 [2024-11-25 23:33:33.245443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:02.128 [2024-11-25 23:33:33.245462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:02.128 [2024-11-25 23:33:33.245475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:02.128 [2024-11-25 23:33:33.245489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:02.128 [2024-11-25 23:33:33.245503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.128 [2024-11-25 23:33:33.245517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:02.128 [2024-11-25 23:33:33.245532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.745 ms 00:32:02.128 [2024-11-25 23:33:33.245545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.255266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.128 [2024-11-25 23:33:33.255349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:02.128 [2024-11-25 23:33:33.255394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.638 ms 00:32:02.128 [2024-11-25 23:33:33.255411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.255683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.128 [2024-11-25 23:33:33.255726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:02.128 [2024-11-25 23:33:33.255759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.250 ms 00:32:02.128 [2024-11-25 23:33:33.255776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.288425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.128 [2024-11-25 23:33:33.288521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:02.128 [2024-11-25 23:33:33.288566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.128 [2024-11-25 23:33:33.288583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.288616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.128 [2024-11-25 23:33:33.288847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:02.128 [2024-11-25 23:33:33.288938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.128 [2024-11-25 23:33:33.288962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.128 [2024-11-25 23:33:33.289028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.128 [2024-11-25 23:33:33.289047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:02.129 [2024-11-25 23:33:33.289122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.129 [2024-11-25 23:33:33.289171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-11-25 23:33:33.289192] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.129 [2024-11-25 23:33:33.289208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:02.129 [2024-11-25 23:33:33.289223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.129 [2024-11-25 23:33:33.289237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-11-25 23:33:33.347999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.129 [2024-11-25 23:33:33.348123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:02.129 [2024-11-25 23:33:33.348168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.129 [2024-11-25 23:33:33.348185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-11-25 23:33:33.396084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.129 [2024-11-25 23:33:33.396188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:02.129 [2024-11-25 23:33:33.396228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.129 [2024-11-25 23:33:33.396245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-11-25 23:33:33.396315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.129 [2024-11-25 23:33:33.396380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:02.129 [2024-11-25 23:33:33.396422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.129 [2024-11-25 23:33:33.396439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-11-25 23:33:33.396488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.129 [2024-11-25 23:33:33.396505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:02.129 [2024-11-25 23:33:33.396520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.129 [2024-11-25 23:33:33.396534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-11-25 23:33:33.396612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.129 [2024-11-25 23:33:33.396673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:02.129 [2024-11-25 23:33:33.396771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.129 [2024-11-25 23:33:33.396789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-11-25 23:33:33.396830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.129 [2024-11-25 23:33:33.396848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:02.129 [2024-11-25 23:33:33.396863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.129 [2024-11-25 23:33:33.396877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-11-25 23:33:33.396923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.129 [2024-11-25 23:33:33.396940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:02.129 [2024-11-25 23:33:33.396955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.129 [2024-11-25 23:33:33.396969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 
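(Aside on the statistics dumped just before this teardown: the WAF figure is simply total writes over user writes, 786752 / 524288 ~ 1.5006, and the user-write count matches the sum of data held in the three closed bands, 261120 + 261120 + 2048 = 524288 blocks; the remainder, 786752 - 524288 = 262464 blocks, is presumably internal FTL traffic such as metadata persistence and relocation.)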
[2024-11-25 23:33:33.397014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:02.129 [2024-11-25 23:33:33.397032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:02.129 [2024-11-25 23:33:33.397047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:02.129 [2024-11-25 23:33:33.397387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-11-25 23:33:33.397535] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8940.129 ms, result 0 00:32:02.698 23:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:02.698 23:33:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:32:02.698 23:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:02.698 23:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:02.698 23:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:02.698 23:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84473 00:32:02.698 23:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:02.698 23:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84473 00:32:02.698 23:33:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:02.698 23:33:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84473 ']' 00:32:02.698 23:33:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.698 23:33:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:02.698 23:33:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.698 23:33:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:02.698 23:33:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:02.698 [2024-11-25 23:33:35.018401] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
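(The shutdown/restart cycle recorded here follows the usual autotest pattern: kill the old target, let the 'FTL shutdown' management process finish, then start a fresh spdk_tgt from the saved config and block until its RPC socket answers. A minimal sketch, not part of the captured output, using the killprocess/waitforlisten helpers and the exact spdk_tgt invocation visible in the trace above; the pid is the one from this run.)

    killprocess 83926        # old target; this triggers the 'FTL shutdown' process logged above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"    # blocks until /var/tmp/spdk.sock accepts RPCs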
00:32:02.698 [2024-11-25 23:33:35.018722] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84473 ] 00:32:02.957 [2024-11-25 23:33:35.172238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.957 [2024-11-25 23:33:35.251531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.524 [2024-11-25 23:33:35.811564] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:03.524 [2024-11-25 23:33:35.811615] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:03.783 [2024-11-25 23:33:35.954408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.783 [2024-11-25 23:33:35.954529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:03.783 [2024-11-25 23:33:35.954544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:03.783 [2024-11-25 23:33:35.954551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.783 [2024-11-25 23:33:35.954594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.783 [2024-11-25 23:33:35.954602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:03.783 [2024-11-25 23:33:35.954608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:32:03.783 [2024-11-25 23:33:35.954614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.783 [2024-11-25 23:33:35.954633] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:03.783 [2024-11-25 23:33:35.955191] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:03.783 [2024-11-25 23:33:35.955204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.783 [2024-11-25 23:33:35.955211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:03.783 [2024-11-25 23:33:35.955217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.577 ms 00:32:03.783 [2024-11-25 23:33:35.955222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.783 [2024-11-25 23:33:35.956255] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:03.783 [2024-11-25 23:33:35.965606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.783 [2024-11-25 23:33:35.965710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:03.783 [2024-11-25 23:33:35.965723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.352 ms 00:32:03.783 [2024-11-25 23:33:35.965729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.783 [2024-11-25 23:33:35.965769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.783 [2024-11-25 23:33:35.965777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:03.783 [2024-11-25 23:33:35.965783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:32:03.783 [2024-11-25 23:33:35.965788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.783 [2024-11-25 23:33:35.970011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.783 [2024-11-25 
23:33:35.970034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:03.783 [2024-11-25 23:33:35.970042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.174 ms 00:32:03.783 [2024-11-25 23:33:35.970047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.783 [2024-11-25 23:33:35.970096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.783 [2024-11-25 23:33:35.970104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:03.783 [2024-11-25 23:33:35.970110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:32:03.783 [2024-11-25 23:33:35.970116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.783 [2024-11-25 23:33:35.970148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.783 [2024-11-25 23:33:35.970156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:03.783 [2024-11-25 23:33:35.970163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:03.783 [2024-11-25 23:33:35.970168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.783 [2024-11-25 23:33:35.970183] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:03.783 [2024-11-25 23:33:35.972826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.783 [2024-11-25 23:33:35.972933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:03.783 [2024-11-25 23:33:35.972948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.646 ms 00:32:03.783 [2024-11-25 23:33:35.972954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.783 [2024-11-25 23:33:35.972976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.783 [2024-11-25 23:33:35.972983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:03.783 [2024-11-25 23:33:35.972989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:03.783 [2024-11-25 23:33:35.972995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.783 [2024-11-25 23:33:35.973009] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:03.783 [2024-11-25 23:33:35.973025] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:03.783 [2024-11-25 23:33:35.973051] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:03.784 [2024-11-25 23:33:35.973071] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:03.784 [2024-11-25 23:33:35.973149] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:03.784 [2024-11-25 23:33:35.973157] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:03.784 [2024-11-25 23:33:35.973165] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:03.784 [2024-11-25 23:33:35.973172] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:03.784 [2024-11-25 23:33:35.973180] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:32:03.784 [2024-11-25 23:33:35.973186] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:03.784 [2024-11-25 23:33:35.973192] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:03.784 [2024-11-25 23:33:35.973197] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:03.784 [2024-11-25 23:33:35.973202] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:03.784 [2024-11-25 23:33:35.973207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.784 [2024-11-25 23:33:35.973213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:03.784 [2024-11-25 23:33:35.973218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.199 ms 00:32:03.784 [2024-11-25 23:33:35.973224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.784 [2024-11-25 23:33:35.973288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.784 [2024-11-25 23:33:35.973294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:03.784 [2024-11-25 23:33:35.973302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:32:03.784 [2024-11-25 23:33:35.973307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.784 [2024-11-25 23:33:35.973382] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:03.784 [2024-11-25 23:33:35.973389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:03.784 [2024-11-25 23:33:35.973396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:03.784 [2024-11-25 23:33:35.973401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:03.784 [2024-11-25 23:33:35.973407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:03.784 [2024-11-25 23:33:35.973413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:03.784 [2024-11-25 23:33:35.973418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:03.784 [2024-11-25 23:33:35.973423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:03.784 [2024-11-25 23:33:35.973429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:03.784 [2024-11-25 23:33:35.973434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:03.784 [2024-11-25 23:33:35.973439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:03.784 [2024-11-25 23:33:35.973445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:03.784 [2024-11-25 23:33:35.973450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:03.784 [2024-11-25 23:33:35.973456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:03.784 [2024-11-25 23:33:35.973461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:03.784 [2024-11-25 23:33:35.973466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:03.784 [2024-11-25 23:33:35.973471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:03.784 [2024-11-25 23:33:35.973476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:03.784 [2024-11-25 23:33:35.973481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:03.784 [2024-11-25 23:33:35.973486] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:03.784 [2024-11-25 23:33:35.973491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:03.784 [2024-11-25 23:33:35.973495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:03.784 [2024-11-25 23:33:35.973500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:03.784 [2024-11-25 23:33:35.973510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:03.784 [2024-11-25 23:33:35.973515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:03.784 [2024-11-25 23:33:35.973520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:03.784 [2024-11-25 23:33:35.973524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:03.784 [2024-11-25 23:33:35.973529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:03.784 [2024-11-25 23:33:35.973534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:03.784 [2024-11-25 23:33:35.973538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:03.784 [2024-11-25 23:33:35.973543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:03.784 [2024-11-25 23:33:35.973548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:03.784 [2024-11-25 23:33:35.973553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:03.784 [2024-11-25 23:33:35.973557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:03.784 [2024-11-25 23:33:35.973562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:03.784 [2024-11-25 23:33:35.973567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:03.784 [2024-11-25 23:33:35.973572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:03.784 [2024-11-25 23:33:35.973576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:03.784 [2024-11-25 23:33:35.973581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:03.784 [2024-11-25 23:33:35.973586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:03.784 [2024-11-25 23:33:35.973591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:03.784 [2024-11-25 23:33:35.973595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:03.784 [2024-11-25 23:33:35.973600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:03.784 [2024-11-25 23:33:35.973606] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:03.784 [2024-11-25 23:33:35.973613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:03.784 [2024-11-25 23:33:35.973618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:03.784 [2024-11-25 23:33:35.973625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:03.784 [2024-11-25 23:33:35.973631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:03.784 [2024-11-25 23:33:35.973636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:03.784 [2024-11-25 23:33:35.973641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:03.784 [2024-11-25 23:33:35.973646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:03.784 [2024-11-25 23:33:35.973651] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:03.784 [2024-11-25 23:33:35.973656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:03.784 [2024-11-25 23:33:35.973661] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:03.784 [2024-11-25 23:33:35.973668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:03.784 [2024-11-25 23:33:35.973675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:03.784 [2024-11-25 23:33:35.973680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:03.784 [2024-11-25 23:33:35.973685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:03.784 [2024-11-25 23:33:35.973690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:03.784 [2024-11-25 23:33:35.973696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:03.784 [2024-11-25 23:33:35.973701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:03.784 [2024-11-25 23:33:35.973706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:03.784 [2024-11-25 23:33:35.973711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:03.784 [2024-11-25 23:33:35.973716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:03.784 [2024-11-25 23:33:35.973722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:03.784 [2024-11-25 23:33:35.973727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:03.784 [2024-11-25 23:33:35.973732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:03.784 [2024-11-25 23:33:35.973738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:03.784 [2024-11-25 23:33:35.973743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:03.784 [2024-11-25 23:33:35.973749] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:03.784 [2024-11-25 23:33:35.973754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:03.784 [2024-11-25 23:33:35.973760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:03.784 [2024-11-25 23:33:35.973766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:03.784 [2024-11-25 23:33:35.973771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:03.784 [2024-11-25 23:33:35.973776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:03.784 [2024-11-25 23:33:35.973782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:03.784 [2024-11-25 23:33:35.973788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:03.784 [2024-11-25 23:33:35.973793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.453 ms 00:32:03.784 [2024-11-25 23:33:35.973798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:03.784 [2024-11-25 23:33:35.973831] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:32:03.785 [2024-11-25 23:33:35.973840] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:07.159 [2024-11-25 23:33:39.411498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.159 [2024-11-25 23:33:39.411564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:07.159 [2024-11-25 23:33:39.411580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3437.651 ms 00:32:07.159 [2024-11-25 23:33:39.411589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.159 [2024-11-25 23:33:39.438088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.159 [2024-11-25 23:33:39.438246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:07.159 [2024-11-25 23:33:39.438265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.281 ms 00:32:07.159 [2024-11-25 23:33:39.438274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.159 [2024-11-25 23:33:39.438364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.159 [2024-11-25 23:33:39.438375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:07.159 [2024-11-25 23:33:39.438384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:32:07.159 [2024-11-25 23:33:39.438393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.159 [2024-11-25 23:33:39.469637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.159 [2024-11-25 23:33:39.469783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:07.159 [2024-11-25 23:33:39.469807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.205 ms 00:32:07.159 [2024-11-25 23:33:39.469816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.159 [2024-11-25 23:33:39.469847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.159 [2024-11-25 23:33:39.469855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:07.159 [2024-11-25 23:33:39.469864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:07.159 [2024-11-25 23:33:39.469871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.159 [2024-11-25 23:33:39.470329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.159 [2024-11-25 23:33:39.470348] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:07.159 [2024-11-25 23:33:39.470358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.390 ms 00:32:07.159 [2024-11-25 23:33:39.470371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.159 [2024-11-25 23:33:39.470412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.159 [2024-11-25 23:33:39.470420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:07.159 [2024-11-25 23:33:39.470429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:32:07.159 [2024-11-25 23:33:39.470438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.159 [2024-11-25 23:33:39.486093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.159 [2024-11-25 23:33:39.486129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:07.159 [2024-11-25 23:33:39.486140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.632 ms 00:32:07.159 [2024-11-25 23:33:39.486147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.159 [2024-11-25 23:33:39.499266] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:07.159 [2024-11-25 23:33:39.499310] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:07.159 [2024-11-25 23:33:39.499322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.159 [2024-11-25 23:33:39.499331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:32:07.159 [2024-11-25 23:33:39.499340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.067 ms 00:32:07.159 [2024-11-25 23:33:39.499347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.159 [2024-11-25 23:33:39.513461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.159 [2024-11-25 23:33:39.513506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:32:07.159 [2024-11-25 23:33:39.513518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.060 ms 00:32:07.159 [2024-11-25 23:33:39.513526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.420 [2024-11-25 23:33:39.525549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.420 [2024-11-25 23:33:39.525594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:32:07.420 [2024-11-25 23:33:39.525605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.970 ms 00:32:07.420 [2024-11-25 23:33:39.525614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.420 [2024-11-25 23:33:39.537940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.420 [2024-11-25 23:33:39.537986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:32:07.420 [2024-11-25 23:33:39.537999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.277 ms 00:32:07.420 [2024-11-25 23:33:39.538006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.420 [2024-11-25 23:33:39.538689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.420 [2024-11-25 23:33:39.538718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:07.420 [2024-11-25 
23:33:39.538730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.541 ms 00:32:07.420 [2024-11-25 23:33:39.538738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.420 [2024-11-25 23:33:39.617265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.420 [2024-11-25 23:33:39.617343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:07.420 [2024-11-25 23:33:39.617361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 78.504 ms 00:32:07.420 [2024-11-25 23:33:39.617371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.420 [2024-11-25 23:33:39.628919] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:07.420 [2024-11-25 23:33:39.630196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.421 [2024-11-25 23:33:39.630396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:07.421 [2024-11-25 23:33:39.630420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.752 ms 00:32:07.421 [2024-11-25 23:33:39.630429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.421 [2024-11-25 23:33:39.630529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.421 [2024-11-25 23:33:39.630544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:32:07.421 [2024-11-25 23:33:39.630555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:32:07.421 [2024-11-25 23:33:39.630563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.421 [2024-11-25 23:33:39.630624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.421 [2024-11-25 23:33:39.630635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:07.421 [2024-11-25 23:33:39.630645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:32:07.421 [2024-11-25 23:33:39.630653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.421 [2024-11-25 23:33:39.630676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.421 [2024-11-25 23:33:39.630686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:07.421 [2024-11-25 23:33:39.630698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:07.421 [2024-11-25 23:33:39.630706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.421 [2024-11-25 23:33:39.630744] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:07.421 [2024-11-25 23:33:39.630756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.421 [2024-11-25 23:33:39.630764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:07.421 [2024-11-25 23:33:39.630773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:32:07.421 [2024-11-25 23:33:39.630782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.421 [2024-11-25 23:33:39.656093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.421 [2024-11-25 23:33:39.656152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:07.421 [2024-11-25 23:33:39.656165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.289 ms 00:32:07.421 [2024-11-25 23:33:39.656174] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.421 [2024-11-25 23:33:39.656270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.421 [2024-11-25 23:33:39.656281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:07.421 [2024-11-25 23:33:39.656292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:32:07.421 [2024-11-25 23:33:39.656300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.421 [2024-11-25 23:33:39.657596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3702.632 ms, result 0 00:32:07.421 [2024-11-25 23:33:39.672521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.421 [2024-11-25 23:33:39.688529] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:07.421 [2024-11-25 23:33:39.696711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:07.421 23:33:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:07.421 23:33:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:07.421 23:33:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:07.421 23:33:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:07.421 23:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:07.683 [2024-11-25 23:33:39.932751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.683 [2024-11-25 23:33:39.932812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:07.683 [2024-11-25 23:33:39.932826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:32:07.683 [2024-11-25 23:33:39.932839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.683 [2024-11-25 23:33:39.932866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.683 [2024-11-25 23:33:39.932875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:07.683 [2024-11-25 23:33:39.932885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:07.683 [2024-11-25 23:33:39.932893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.683 [2024-11-25 23:33:39.932930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:07.683 [2024-11-25 23:33:39.932940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:07.683 [2024-11-25 23:33:39.932949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:07.683 [2024-11-25 23:33:39.932957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:07.683 [2024-11-25 23:33:39.933022] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.267 ms, result 0 00:32:07.683 true 00:32:07.683 23:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:07.945 { 00:32:07.945 "name": "ftl", 00:32:07.945 "properties": [ 00:32:07.945 { 00:32:07.945 "name": "superblock_version", 00:32:07.945 "value": 5, 00:32:07.945 "read-only": true 00:32:07.945 }, 
00:32:07.945 { 00:32:07.945 "name": "base_device", 00:32:07.945 "bands": [ 00:32:07.945 { 00:32:07.945 "id": 0, 00:32:07.945 "state": "CLOSED", 00:32:07.945 "validity": 1.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 1, 00:32:07.945 "state": "CLOSED", 00:32:07.945 "validity": 1.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 2, 00:32:07.945 "state": "CLOSED", 00:32:07.945 "validity": 0.007843137254901933 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 3, 00:32:07.945 "state": "FREE", 00:32:07.945 "validity": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 4, 00:32:07.945 "state": "FREE", 00:32:07.945 "validity": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 5, 00:32:07.945 "state": "FREE", 00:32:07.945 "validity": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 6, 00:32:07.945 "state": "FREE", 00:32:07.945 "validity": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 7, 00:32:07.945 "state": "FREE", 00:32:07.945 "validity": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 8, 00:32:07.945 "state": "FREE", 00:32:07.945 "validity": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 9, 00:32:07.945 "state": "FREE", 00:32:07.945 "validity": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 10, 00:32:07.945 "state": "FREE", 00:32:07.945 "validity": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 11, 00:32:07.945 "state": "FREE", 00:32:07.945 "validity": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 12, 00:32:07.945 "state": "FREE", 00:32:07.945 "validity": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 13, 00:32:07.945 "state": "FREE", 00:32:07.945 "validity": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 14, 00:32:07.945 "state": "FREE", 00:32:07.945 "validity": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 15, 00:32:07.945 "state": "FREE", 00:32:07.945 "validity": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 16, 00:32:07.945 "state": "FREE", 00:32:07.945 "validity": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 17, 00:32:07.945 "state": "FREE", 00:32:07.945 "validity": 0.0 00:32:07.945 } 00:32:07.945 ], 00:32:07.945 "read-only": true 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "name": "cache_device", 00:32:07.945 "type": "bdev", 00:32:07.945 "chunks": [ 00:32:07.945 { 00:32:07.945 "id": 0, 00:32:07.945 "state": "INACTIVE", 00:32:07.945 "utilization": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 1, 00:32:07.945 "state": "OPEN", 00:32:07.945 "utilization": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 2, 00:32:07.945 "state": "OPEN", 00:32:07.945 "utilization": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 3, 00:32:07.945 "state": "FREE", 00:32:07.945 "utilization": 0.0 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "id": 4, 00:32:07.945 "state": "FREE", 00:32:07.945 "utilization": 0.0 00:32:07.945 } 00:32:07.945 ], 00:32:07.945 "read-only": true 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "name": "verbose_mode", 00:32:07.945 "value": true, 00:32:07.945 "unit": "", 00:32:07.945 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:07.945 }, 00:32:07.945 { 00:32:07.945 "name": "prep_upgrade_on_shutdown", 00:32:07.945 "value": false, 00:32:07.945 "unit": "", 00:32:07.945 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:07.945 } 00:32:07.945 ] 00:32:07.945 } 00:32:07.945 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:32:07.945 23:33:40 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:07.945 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:08.207 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:32:08.207 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:32:08.207 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:32:08.207 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:08.207 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:32:08.470 Validate MD5 checksum, iteration 1 00:32:08.470 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:32:08.470 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:32:08.470 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:32:08.470 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:08.470 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:08.470 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:08.470 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:08.470 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:08.470 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:08.470 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:08.470 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:08.470 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:08.470 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:08.470 [2024-11-25 23:33:40.699975] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
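The tcp_dd invocations above are the body of test_validate_checksum, which reads the FTL bdev back over NVMe/TCP in 1 GiB windows and fingerprints each one. A minimal sketch of that loop in the harness's own bash, assuming the ini.json initiator config already points spdk_dd at the exported ftln1 namespace and that the per-window reference digests were recorded by an earlier write phase (both are harness state not shown in this excerpt):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk      # path as it appears in the trace
  skip=0
  for ((i = 0; i < 2; i++)); do              # iterations=2 in this scenario
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # Read one 1024 MiB window (bs=1 MiB, count=1024) from the ftl bdev.
    "$SPDK_DIR/build/bin/spdk_dd" '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$SPDK_DIR/test/ftl/config/ini.json" \
        --ib=ftln1 --of="$SPDK_DIR/test/ftl/file" \
        --bs=1048576 --count=1024 --qd=2 --skip="$skip"
    skip=$((skip + 1024))                    # next window starts 1024 blocks in
    sum=$(md5sum "$SPDK_DIR/test/ftl/file" | cut -f1 -d' ')
    # Each window's digest is compared against the recorded reference;
    # the matching values for this run (480bc475... and d6260984...)
    # appear in the trace that follows.
  done

The --bs/--count/--qd/--skip values mirror the trace exactly; only the digest bookkeeping is paraphrased.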
00:32:08.470 [2024-11-25 23:33:40.700356] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84554 ] 00:32:08.732 [2024-11-25 23:33:40.863890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.732 [2024-11-25 23:33:40.986520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.652  [2024-11-25T23:33:43.591Z] Copying: 494/1024 [MB] (494 MBps) [2024-11-25T23:33:45.064Z] Copying: 1024/1024 [MB] (average 523 MBps) 00:32:12.695 00:32:12.695 23:33:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:12.695 23:33:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:15.241 23:33:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:15.241 Validate MD5 checksum, iteration 2 00:32:15.241 23:33:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=480bc47511078da130eb04c4bd119a0b 00:32:15.241 23:33:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 480bc47511078da130eb04c4bd119a0b != \4\8\0\b\c\4\7\5\1\1\0\7\8\d\a\1\3\0\e\b\0\4\c\4\b\d\1\1\9\a\0\b ]] 00:32:15.241 23:33:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:15.241 23:33:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:15.241 23:33:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:15.241 23:33:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:15.241 23:33:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:15.241 23:33:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:15.241 23:33:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:15.241 23:33:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:15.241 23:33:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:15.241 [2024-11-25 23:33:47.123314] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
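The backslash-laden comparison above is just bash xtrace rendering: inside [[ ]] the right-hand side of != is treated as a glob pattern, so the harness escapes the expected digest character by character to force a literal match. Quoting achieves the same effect; a self-contained equivalent of the iteration-1 check, with the digest taken from this run:

  sum=480bc47511078da130eb04c4bd119a0b        # first field of md5sum output
  expected=480bc47511078da130eb04c4bd119a0b   # digest recorded by the write phase
  # Quoting the right-hand side of != inside [[ ]] disables pattern
  # matching, which is exactly what the escaped form in the trace does.
  if [[ $sum != "$expected" ]]; then
    echo "checksum mismatch: $sum != $expected" >&2
    exit 1
  fi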
00:32:15.241 [2024-11-25 23:33:47.123429] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84621 ] 00:32:15.241 [2024-11-25 23:33:47.283551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.241 [2024-11-25 23:33:47.377315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.629  [2024-11-25T23:33:49.939Z] Copying: 487/1024 [MB] (487 MBps) [2024-11-25T23:33:49.939Z] Copying: 1016/1024 [MB] (529 MBps) [2024-11-25T23:33:50.882Z] Copying: 1024/1024 [MB] (average 508 MBps) 00:32:18.513 00:32:18.513 23:33:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:18.513 23:33:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:21.060 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:21.060 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d62609845388fab3d93be09f8fe11cd3 00:32:21.060 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d62609845388fab3d93be09f8fe11cd3 != \d\6\2\6\0\9\8\4\5\3\8\8\f\a\b\3\d\9\3\b\e\0\9\f\8\f\e\1\1\c\d\3 ]] 00:32:21.060 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:21.060 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:21.060 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:32:21.060 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84473 ]] 00:32:21.060 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84473 00:32:21.061 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:32:21.061 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:32:21.061 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:21.061 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:21.061 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:21.061 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84688 00:32:21.061 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:21.061 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84688 00:32:21.061 23:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:21.061 23:33:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84688 ']' 00:32:21.061 23:33:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.061 23:33:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:21.061 23:33:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:21.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
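What follows is the core of the upgrade_shutdown scenario: tcp_target_shutdown_dirty kills the target with SIGKILL so FTL gets no chance to persist a clean shutdown state, and tcp_target_setup then restarts it from the tgt.json saved earlier, forcing the recovery path traced below. A sketch of that sequence, assuming the spdk_tgt_pid bookkeeping and the waitforlisten helper from autotest_common.sh:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # Dirty shutdown: no SIGTERM and no FTL fast shutdown, just SIGKILL.
  kill -9 "$spdk_tgt_pid"                      # pid 84473 in this run
  unset spdk_tgt_pid
  # Restart from the saved JSON config; the new process (84688 here)
  # must rebuild band, chunk and P2L state from what is left on media.
  "$SPDK_DIR/build/bin/spdk_tgt" '--cpumask=[0]' \
      --config="$SPDK_DIR/test/ftl/config/tgt.json" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"   # harness helper: polls /var/tmp/spdk.sock

Because prep_upgrade_on_shutdown is false in the properties dump above, nothing is staged before the kill, so the restarted target has to replay the P2L checkpoints and re-close the open chunks; that is exactly the recovery work visible in the startup trace that follows.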
00:32:21.061 23:33:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:21.061 23:33:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:21.061 [2024-11-25 23:33:53.011505] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:32:21.061 [2024-11-25 23:33:53.011812] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84688 ] 00:32:21.061 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84473 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:32:21.061 [2024-11-25 23:33:53.170612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.061 [2024-11-25 23:33:53.276202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.633 [2024-11-25 23:33:53.900655] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:21.633 [2024-11-25 23:33:53.900713] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:21.897 [2024-11-25 23:33:54.049364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.897 [2024-11-25 23:33:54.049396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:21.897 [2024-11-25 23:33:54.049408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:21.897 [2024-11-25 23:33:54.049415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.897 [2024-11-25 23:33:54.049455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.897 [2024-11-25 23:33:54.049463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:21.897 [2024-11-25 23:33:54.049470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:32:21.897 [2024-11-25 23:33:54.049476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.897 [2024-11-25 23:33:54.049494] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:21.897 [2024-11-25 23:33:54.050461] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:21.897 [2024-11-25 23:33:54.050489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.897 [2024-11-25 23:33:54.050496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:21.897 [2024-11-25 23:33:54.050504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.001 ms 00:32:21.897 [2024-11-25 23:33:54.050511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.897 [2024-11-25 23:33:54.050747] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:21.897 [2024-11-25 23:33:54.064484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.897 [2024-11-25 23:33:54.064639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:21.897 [2024-11-25 23:33:54.064654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.736 ms 00:32:21.897 [2024-11-25 23:33:54.064661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.897 [2024-11-25 23:33:54.071664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:32:21.897 [2024-11-25 23:33:54.071759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:21.897 [2024-11-25 23:33:54.071771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:32:21.897 [2024-11-25 23:33:54.071778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.897 [2024-11-25 23:33:54.072030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.897 [2024-11-25 23:33:54.072040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:21.897 [2024-11-25 23:33:54.072047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.189 ms 00:32:21.897 [2024-11-25 23:33:54.072053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.897 [2024-11-25 23:33:54.072108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.897 [2024-11-25 23:33:54.072116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:21.897 [2024-11-25 23:33:54.072123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:32:21.897 [2024-11-25 23:33:54.072128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.897 [2024-11-25 23:33:54.072150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.897 [2024-11-25 23:33:54.072157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:21.897 [2024-11-25 23:33:54.072163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:21.897 [2024-11-25 23:33:54.072169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.897 [2024-11-25 23:33:54.072186] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:21.897 [2024-11-25 23:33:54.074600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.897 [2024-11-25 23:33:54.074621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:21.897 [2024-11-25 23:33:54.074629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.418 ms 00:32:21.897 [2024-11-25 23:33:54.074637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.897 [2024-11-25 23:33:54.074660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.897 [2024-11-25 23:33:54.074667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:21.897 [2024-11-25 23:33:54.074673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:21.897 [2024-11-25 23:33:54.074679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.897 [2024-11-25 23:33:54.074694] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:21.897 [2024-11-25 23:33:54.074710] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:21.897 [2024-11-25 23:33:54.074737] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:21.897 [2024-11-25 23:33:54.074751] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:21.897 [2024-11-25 23:33:54.074832] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:21.897 [2024-11-25 23:33:54.074841] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:21.897 [2024-11-25 23:33:54.074849] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:21.897 [2024-11-25 23:33:54.074857] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:21.897 [2024-11-25 23:33:54.074864] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:21.897 [2024-11-25 23:33:54.074870] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:21.897 [2024-11-25 23:33:54.074875] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:21.897 [2024-11-25 23:33:54.074881] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:21.897 [2024-11-25 23:33:54.074887] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:21.897 [2024-11-25 23:33:54.074895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.897 [2024-11-25 23:33:54.074901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:21.897 [2024-11-25 23:33:54.074907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.203 ms 00:32:21.897 [2024-11-25 23:33:54.074913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.897 [2024-11-25 23:33:54.074978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.897 [2024-11-25 23:33:54.074984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:21.897 [2024-11-25 23:33:54.074990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:32:21.897 [2024-11-25 23:33:54.074996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.897 [2024-11-25 23:33:54.075183] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:21.897 [2024-11-25 23:33:54.075214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:21.897 [2024-11-25 23:33:54.075231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:21.897 [2024-11-25 23:33:54.075246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.897 [2024-11-25 23:33:54.075262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:21.897 [2024-11-25 23:33:54.075276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:21.897 [2024-11-25 23:33:54.075290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:21.897 [2024-11-25 23:33:54.075304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:21.897 [2024-11-25 23:33:54.075319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:21.897 [2024-11-25 23:33:54.075333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.897 [2024-11-25 23:33:54.075347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:21.897 [2024-11-25 23:33:54.075361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:21.897 [2024-11-25 23:33:54.075375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.897 [2024-11-25 23:33:54.075388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:21.897 [2024-11-25 23:33:54.075402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:32:21.897 [2024-11-25 23:33:54.075459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.897 [2024-11-25 23:33:54.075477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:21.897 [2024-11-25 23:33:54.075493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:21.898 [2024-11-25 23:33:54.075507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.898 [2024-11-25 23:33:54.075521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:21.898 [2024-11-25 23:33:54.075535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:21.898 [2024-11-25 23:33:54.075555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:21.898 [2024-11-25 23:33:54.075569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:21.898 [2024-11-25 23:33:54.075582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:21.898 [2024-11-25 23:33:54.075595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:21.898 [2024-11-25 23:33:54.075609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:21.898 [2024-11-25 23:33:54.075657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:21.898 [2024-11-25 23:33:54.075675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:21.898 [2024-11-25 23:33:54.075688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:21.898 [2024-11-25 23:33:54.075702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:21.898 [2024-11-25 23:33:54.075715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:21.898 [2024-11-25 23:33:54.075729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:21.898 [2024-11-25 23:33:54.075743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:21.898 [2024-11-25 23:33:54.075756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.898 [2024-11-25 23:33:54.075771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:21.898 [2024-11-25 23:33:54.075784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:21.898 [2024-11-25 23:33:54.075831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.898 [2024-11-25 23:33:54.075839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:21.898 [2024-11-25 23:33:54.075845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:21.898 [2024-11-25 23:33:54.075850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.898 [2024-11-25 23:33:54.075855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:21.898 [2024-11-25 23:33:54.075861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:21.898 [2024-11-25 23:33:54.075866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.898 [2024-11-25 23:33:54.075872] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:21.898 [2024-11-25 23:33:54.075878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:21.898 [2024-11-25 23:33:54.075884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:21.898 [2024-11-25 23:33:54.075889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:32:21.898 [2024-11-25 23:33:54.075895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:21.898 [2024-11-25 23:33:54.075900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:21.898 [2024-11-25 23:33:54.075906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:21.898 [2024-11-25 23:33:54.075912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:21.898 [2024-11-25 23:33:54.075917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:21.898 [2024-11-25 23:33:54.075923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:21.898 [2024-11-25 23:33:54.075930] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:21.898 [2024-11-25 23:33:54.075938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:21.898 [2024-11-25 23:33:54.075946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:21.898 [2024-11-25 23:33:54.075952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:21.898 [2024-11-25 23:33:54.075957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:21.898 [2024-11-25 23:33:54.075963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:21.898 [2024-11-25 23:33:54.075969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:21.898 [2024-11-25 23:33:54.075974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:21.898 [2024-11-25 23:33:54.075980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:21.898 [2024-11-25 23:33:54.075985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:21.898 [2024-11-25 23:33:54.075991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:21.898 [2024-11-25 23:33:54.075997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:21.898 [2024-11-25 23:33:54.076002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:21.898 [2024-11-25 23:33:54.076008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:21.898 [2024-11-25 23:33:54.076014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:21.898 [2024-11-25 23:33:54.076020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:21.898 [2024-11-25 23:33:54.076026] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:32:21.898 [2024-11-25 23:33:54.076032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:21.898 [2024-11-25 23:33:54.076042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:21.898 [2024-11-25 23:33:54.076048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:21.898 [2024-11-25 23:33:54.076053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:21.898 [2024-11-25 23:33:54.076069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:21.898 [2024-11-25 23:33:54.076075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.898 [2024-11-25 23:33:54.076081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:21.898 [2024-11-25 23:33:54.076088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.056 ms 00:32:21.898 [2024-11-25 23:33:54.076093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.898 [2024-11-25 23:33:54.097368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.898 [2024-11-25 23:33:54.097394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:21.898 [2024-11-25 23:33:54.097402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.218 ms 00:32:21.898 [2024-11-25 23:33:54.097409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.898 [2024-11-25 23:33:54.097440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.898 [2024-11-25 23:33:54.097447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:21.898 [2024-11-25 23:33:54.097454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:32:21.898 [2024-11-25 23:33:54.097460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.898 [2024-11-25 23:33:54.123914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.898 [2024-11-25 23:33:54.123940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:21.898 [2024-11-25 23:33:54.123949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.414 ms 00:32:21.898 [2024-11-25 23:33:54.123955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.898 [2024-11-25 23:33:54.123977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.898 [2024-11-25 23:33:54.123983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:21.898 [2024-11-25 23:33:54.123990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:21.898 [2024-11-25 23:33:54.123999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.898 [2024-11-25 23:33:54.124086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.898 [2024-11-25 23:33:54.124095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:21.898 [2024-11-25 23:33:54.124102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:32:21.898 [2024-11-25 23:33:54.124107] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:21.898 [2024-11-25 23:33:54.124142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.898 [2024-11-25 23:33:54.124148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:21.898 [2024-11-25 23:33:54.124155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:32:21.898 [2024-11-25 23:33:54.124161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.898 [2024-11-25 23:33:54.137378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.898 [2024-11-25 23:33:54.137511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:21.898 [2024-11-25 23:33:54.137523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.196 ms 00:32:21.898 [2024-11-25 23:33:54.137532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.898 [2024-11-25 23:33:54.137612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.898 [2024-11-25 23:33:54.137621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:32:21.898 [2024-11-25 23:33:54.137628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:21.898 [2024-11-25 23:33:54.137633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.898 [2024-11-25 23:33:54.165660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.898 [2024-11-25 23:33:54.165773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:32:21.898 [2024-11-25 23:33:54.165788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.011 ms 00:32:21.898 [2024-11-25 23:33:54.165795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.898 [2024-11-25 23:33:54.173129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.898 [2024-11-25 23:33:54.173158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:21.898 [2024-11-25 23:33:54.173166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.417 ms 00:32:21.898 [2024-11-25 23:33:54.173172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.898 [2024-11-25 23:33:54.219475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.899 [2024-11-25 23:33:54.219509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:21.899 [2024-11-25 23:33:54.219518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.261 ms 00:32:21.899 [2024-11-25 23:33:54.219525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.899 [2024-11-25 23:33:54.219653] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:32:21.899 [2024-11-25 23:33:54.219755] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:32:21.899 [2024-11-25 23:33:54.219851] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:32:21.899 [2024-11-25 23:33:54.219946] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:32:21.899 [2024-11-25 23:33:54.219954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.899 [2024-11-25 23:33:54.219960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:32:21.899 [2024-11-25 
23:33:54.219967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.395 ms 00:32:21.899 [2024-11-25 23:33:54.219973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.899 [2024-11-25 23:33:54.220016] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:32:21.899 [2024-11-25 23:33:54.220025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.899 [2024-11-25 23:33:54.220035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:32:21.899 [2024-11-25 23:33:54.220042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:32:21.899 [2024-11-25 23:33:54.220048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.899 [2024-11-25 23:33:54.232621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.899 [2024-11-25 23:33:54.232649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:32:21.899 [2024-11-25 23:33:54.232658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.543 ms 00:32:21.899 [2024-11-25 23:33:54.232665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.899 [2024-11-25 23:33:54.239091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.899 [2024-11-25 23:33:54.239199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:32:21.899 [2024-11-25 23:33:54.239213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:32:21.899 [2024-11-25 23:33:54.239220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.899 [2024-11-25 23:33:54.239288] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:32:21.899 [2024-11-25 23:33:54.239448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.899 [2024-11-25 23:33:54.239459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:21.899 [2024-11-25 23:33:54.239466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.161 ms 00:32:21.899 [2024-11-25 23:33:54.239472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:22.844 [2024-11-25 23:33:55.105542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:22.844 [2024-11-25 23:33:55.105581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:22.844 [2024-11-25 23:33:55.105592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 865.403 ms 00:32:22.844 [2024-11-25 23:33:55.105600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:22.844 [2024-11-25 23:33:55.109261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:22.844 [2024-11-25 23:33:55.109377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:22.844 [2024-11-25 23:33:55.109390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.408 ms 00:32:22.844 [2024-11-25 23:33:55.109402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:22.844 [2024-11-25 23:33:55.109941] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:32:22.844 [2024-11-25 23:33:55.109965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:22.844 [2024-11-25 23:33:55.109973] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:22.844 [2024-11-25 23:33:55.109980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.540 ms 00:32:22.844 [2024-11-25 23:33:55.109987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:22.844 [2024-11-25 23:33:55.110012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:22.844 [2024-11-25 23:33:55.110020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:22.845 [2024-11-25 23:33:55.110026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:22.845 [2024-11-25 23:33:55.110037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:22.845 [2024-11-25 23:33:55.110075] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 870.772 ms, result 0 00:32:22.845 [2024-11-25 23:33:55.110104] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:32:22.845 [2024-11-25 23:33:55.110254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:22.845 [2024-11-25 23:33:55.110262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:22.845 [2024-11-25 23:33:55.110268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.151 ms 00:32:22.845 [2024-11-25 23:33:55.110274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.791 [2024-11-25 23:33:56.001554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.791 [2024-11-25 23:33:56.001585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:23.791 [2024-11-25 23:33:56.001601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 890.552 ms 00:32:23.791 [2024-11-25 23:33:56.001607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.791 [2024-11-25 23:33:56.005161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.791 [2024-11-25 23:33:56.005186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:23.791 [2024-11-25 23:33:56.005193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.419 ms 00:32:23.791 [2024-11-25 23:33:56.005199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.791 [2024-11-25 23:33:56.006002] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:32:23.791 [2024-11-25 23:33:56.006027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.791 [2024-11-25 23:33:56.006034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:23.791 [2024-11-25 23:33:56.006041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.808 ms 00:32:23.791 [2024-11-25 23:33:56.006046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.791 [2024-11-25 23:33:56.006105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.791 [2024-11-25 23:33:56.006114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:23.791 [2024-11-25 23:33:56.006120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:23.791 [2024-11-25 23:33:56.006126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.791 [2024-11-25 
23:33:56.006152] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 896.043 ms, result 0 00:32:23.791 [2024-11-25 23:33:56.006184] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:23.791 [2024-11-25 23:33:56.006192] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:23.791 [2024-11-25 23:33:56.006200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.791 [2024-11-25 23:33:56.006206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:32:23.791 [2024-11-25 23:33:56.006212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1766.921 ms 00:32:23.791 [2024-11-25 23:33:56.006218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.791 [2024-11-25 23:33:56.006241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.791 [2024-11-25 23:33:56.006251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:32:23.791 [2024-11-25 23:33:56.006257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:23.791 [2024-11-25 23:33:56.006263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.791 [2024-11-25 23:33:56.014576] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:23.791 [2024-11-25 23:33:56.014660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.791 [2024-11-25 23:33:56.014668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:23.791 [2024-11-25 23:33:56.014675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.383 ms 00:32:23.791 [2024-11-25 23:33:56.014681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.791 [2024-11-25 23:33:56.015225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.791 [2024-11-25 23:33:56.015240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:32:23.791 [2024-11-25 23:33:56.015250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.496 ms 00:32:23.791 [2024-11-25 23:33:56.015256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.791 [2024-11-25 23:33:56.016933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.791 [2024-11-25 23:33:56.017040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:32:23.791 [2024-11-25 23:33:56.017051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.665 ms 00:32:23.791 [2024-11-25 23:33:56.017074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.791 [2024-11-25 23:33:56.017105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.791 [2024-11-25 23:33:56.017113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:32:23.791 [2024-11-25 23:33:56.017119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:23.791 [2024-11-25 23:33:56.017129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.791 [2024-11-25 23:33:56.017209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.791 [2024-11-25 23:33:56.017217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:23.791 
[2024-11-25 23:33:56.017223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:32:23.791 [2024-11-25 23:33:56.017229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.791 [2024-11-25 23:33:56.017248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.791 [2024-11-25 23:33:56.017254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:23.791 [2024-11-25 23:33:56.017260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:23.791 [2024-11-25 23:33:56.017265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.791 [2024-11-25 23:33:56.017291] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:23.791 [2024-11-25 23:33:56.017298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.791 [2024-11-25 23:33:56.017305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:23.791 [2024-11-25 23:33:56.017311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:23.791 [2024-11-25 23:33:56.017317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.791 [2024-11-25 23:33:56.017358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.791 [2024-11-25 23:33:56.017364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:23.791 [2024-11-25 23:33:56.017370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:32:23.791 [2024-11-25 23:33:56.017377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.792 [2024-11-25 23:33:56.018238] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1968.480 ms, result 0 00:32:23.792 [2024-11-25 23:33:56.030947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.792 [2024-11-25 23:33:56.046951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:23.792 [2024-11-25 23:33:56.055113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:23.792 Validate MD5 checksum, iteration 1 00:32:23.792 23:33:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:23.792 23:33:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:23.792 23:33:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:23.792 23:33:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:23.792 23:33:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:32:23.792 23:33:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:23.792 23:33:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:23.792 23:33:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:23.792 23:33:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:23.792 23:33:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:23.792 23:33:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:23.792 23:33:56 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:23.792 23:33:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:23.792 23:33:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:23.792 23:33:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:24.053 [2024-11-25 23:33:56.157636] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:32:24.054 [2024-11-25 23:33:56.157866] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84727 ] 00:32:24.054 [2024-11-25 23:33:56.313461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.054 [2024-11-25 23:33:56.408780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.971  [2024-11-25T23:33:58.599Z] Copying: 659/1024 [MB] (659 MBps) [2024-11-25T23:33:59.536Z] Copying: 1024/1024 [MB] (average 648 MBps) 00:32:27.168 00:32:27.168 23:33:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:27.168 23:33:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:29.703 23:34:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:29.703 Validate MD5 checksum, iteration 2 00:32:29.703 23:34:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=480bc47511078da130eb04c4bd119a0b 00:32:29.703 23:34:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 480bc47511078da130eb04c4bd119a0b != \4\8\0\b\c\4\7\5\1\1\0\7\8\d\a\1\3\0\e\b\0\4\c\4\b\d\1\1\9\a\0\b ]] 00:32:29.703 23:34:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:29.703 23:34:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:29.703 23:34:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:29.703 23:34:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:29.703 23:34:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:29.703 23:34:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:29.703 23:34:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:29.703 23:34:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:29.703 23:34:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:29.703 [2024-11-25 23:34:01.575246] Starting SPDK v25.01-pre git sha1 
2a91567e4 / DPDK 24.03.0 initialization... 00:32:29.703 [2024-11-25 23:34:01.575357] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84784 ] 00:32:29.703 [2024-11-25 23:34:01.730569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.703 [2024-11-25 23:34:01.804610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.077  [2024-11-25T23:34:04.015Z] Copying: 659/1024 [MB] (659 MBps) [2024-11-25T23:34:09.298Z] Copying: 1024/1024 [MB] (average 649 MBps) 00:32:36.929 00:32:36.929 23:34:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:36.929 23:34:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d62609845388fab3d93be09f8fe11cd3 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d62609845388fab3d93be09f8fe11cd3 != \d\6\2\6\0\9\8\4\5\3\8\8\f\a\b\3\d\9\3\b\e\0\9\f\8\f\e\1\1\c\d\3 ]] 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84688 ]] 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84688 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84688 ']' 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84688 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84688 00:32:39.473 killing process with pid 84688 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84688' 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84688 00:32:39.473 23:34:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84688 00:32:39.734 [2024-11-25 23:34:12.031748] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:39.734 [2024-11-25 23:34:12.043380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.734 [2024-11-25 23:34:12.043415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:39.734 [2024-11-25 23:34:12.043428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:39.734 [2024-11-25 23:34:12.043435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.734 [2024-11-25 23:34:12.043454] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:39.734 [2024-11-25 23:34:12.045734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.734 [2024-11-25 23:34:12.045758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:39.734 [2024-11-25 23:34:12.045771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.268 ms 00:32:39.734 [2024-11-25 23:34:12.045778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.734 [2024-11-25 23:34:12.045962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.734 [2024-11-25 23:34:12.045975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:39.734 [2024-11-25 23:34:12.045984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.166 ms 00:32:39.734 [2024-11-25 23:34:12.045991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.734 [2024-11-25 23:34:12.047484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.734 [2024-11-25 23:34:12.047510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:39.734 [2024-11-25 23:34:12.047518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.481 ms 00:32:39.734 [2024-11-25 23:34:12.047529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.734 [2024-11-25 23:34:12.048425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.734 [2024-11-25 23:34:12.048440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:39.734 [2024-11-25 23:34:12.048449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.869 ms 00:32:39.734 [2024-11-25 23:34:12.048458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.734 [2024-11-25 23:34:12.056769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.734 [2024-11-25 23:34:12.056796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:39.734 [2024-11-25 23:34:12.056804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.283 ms 00:32:39.734 [2024-11-25 23:34:12.056815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.735 [2024-11-25 23:34:12.061295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.735 [2024-11-25 23:34:12.061321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:39.735 [2024-11-25 23:34:12.061331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.450 ms 00:32:39.735 [2024-11-25 23:34:12.061338] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:39.735 [2024-11-25 23:34:12.061411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.735 [2024-11-25 23:34:12.061420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:39.735 [2024-11-25 23:34:12.061427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:32:39.735 [2024-11-25 23:34:12.061437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.735 [2024-11-25 23:34:12.069272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.735 [2024-11-25 23:34:12.069295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:39.735 [2024-11-25 23:34:12.069303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.822 ms 00:32:39.735 [2024-11-25 23:34:12.069310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.735 [2024-11-25 23:34:12.077020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.735 [2024-11-25 23:34:12.077207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:39.735 [2024-11-25 23:34:12.077222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.683 ms 00:32:39.735 [2024-11-25 23:34:12.077228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.735 [2024-11-25 23:34:12.084731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.735 [2024-11-25 23:34:12.084836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:39.735 [2024-11-25 23:34:12.084848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.478 ms 00:32:39.735 [2024-11-25 23:34:12.084854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.735 [2024-11-25 23:34:12.092117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.735 [2024-11-25 23:34:12.092141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:39.735 [2024-11-25 23:34:12.092149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.216 ms 00:32:39.735 [2024-11-25 23:34:12.092155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.735 [2024-11-25 23:34:12.092180] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:39.735 [2024-11-25 23:34:12.092193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:39.735 [2024-11-25 23:34:12.092202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:39.735 [2024-11-25 23:34:12.092208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:39.735 [2024-11-25 23:34:12.092215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:39.735 [2024-11-25 23:34:12.092221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:39.735 [2024-11-25 23:34:12.092227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:39.735 [2024-11-25 23:34:12.092234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:39.735 [2024-11-25 23:34:12.092240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:39.735 
[2024-11-25 23:34:12.092246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:39.735 [2024-11-25 23:34:12.092252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:39.735 [2024-11-25 23:34:12.092258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:39.735 [2024-11-25 23:34:12.092264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:39.735 [2024-11-25 23:34:12.092271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:39.735 [2024-11-25 23:34:12.092276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:39.735 [2024-11-25 23:34:12.092283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:39.735 [2024-11-25 23:34:12.092289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:39.735 [2024-11-25 23:34:12.092295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:39.735 [2024-11-25 23:34:12.092301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:39.735 [2024-11-25 23:34:12.092309] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:39.735 [2024-11-25 23:34:12.092315] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a990302b-059b-4be7-a465-efdf8b4c486d 00:32:39.735 [2024-11-25 23:34:12.092321] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:39.735 [2024-11-25 23:34:12.092327] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:32:39.735 [2024-11-25 23:34:12.092332] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:32:39.735 [2024-11-25 23:34:12.092338] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:32:39.735 [2024-11-25 23:34:12.092344] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:39.735 [2024-11-25 23:34:12.092350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:39.735 [2024-11-25 23:34:12.092356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:39.735 [2024-11-25 23:34:12.092361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:39.735 [2024-11-25 23:34:12.092367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:39.735 [2024-11-25 23:34:12.092375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.735 [2024-11-25 23:34:12.092385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:39.735 [2024-11-25 23:34:12.092391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:32:39.735 [2024-11-25 23:34:12.092397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.996 [2024-11-25 23:34:12.102576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.996 [2024-11-25 23:34:12.102600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:39.996 [2024-11-25 23:34:12.102609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.166 ms 00:32:39.996 [2024-11-25 23:34:12.102616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
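The shutdown dump just above is internally consistent: bands 1 and 2 are full (261120 valid blocks each), band 3 holds 2048, and 261120 + 261120 + 2048 = 524288, exactly the "total valid LBAs" reported; at the 4096-byte block size these bdevs use, that is 2 GiB, i.e. the two 1024 MiB passes of the "Validate MD5 checksum" loop. The loop itself is a few lines of shell; a hypothetical reconstruction follows, assuming the expected hashes were captured earlier in the test (the cleanup traced further below removes a file.md5 alongside the data file):

# Sketch of test_validate_checksum as traced above (upgrade_shutdown.sh@96-105).
# checksums[] is an assumption: the rhs of the traced [[ ... != \4\8\0... ]]
# is just bash xtrace glob-escaping whatever variable holds the expected hash.
test_validate_checksum() {
    local iterations=2 skip=0 i sum
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read 1024 x 1 MiB blocks from the restored FTL bdev over NVMe/TCP.
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        sum=$(md5sum "$testdir/file" | cut -f1 '-d ')
        # Re-reading the same LBAs after restore must reproduce the same data.
        [[ $sum == "${checksums[i]}" ]] || return 1
        skip=$((skip + 1024))
    done
}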
00:32:39.996 [2024-11-25 23:34:12.102920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:39.996 [2024-11-25 23:34:12.102928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:39.996 [2024-11-25 23:34:12.102935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.275 ms 00:32:39.996 [2024-11-25 23:34:12.102941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.996 [2024-11-25 23:34:12.137892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:39.996 [2024-11-25 23:34:12.137919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:39.996 [2024-11-25 23:34:12.137928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:39.996 [2024-11-25 23:34:12.137934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.996 [2024-11-25 23:34:12.137965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:39.996 [2024-11-25 23:34:12.137972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:39.996 [2024-11-25 23:34:12.137979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:39.996 [2024-11-25 23:34:12.137984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.996 [2024-11-25 23:34:12.138051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:39.996 [2024-11-25 23:34:12.138081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:39.996 [2024-11-25 23:34:12.138088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:39.996 [2024-11-25 23:34:12.138094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.996 [2024-11-25 23:34:12.138111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:39.996 [2024-11-25 23:34:12.138119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:39.996 [2024-11-25 23:34:12.138125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:39.996 [2024-11-25 23:34:12.138131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.996 [2024-11-25 23:34:12.201020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:39.996 [2024-11-25 23:34:12.201066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:39.996 [2024-11-25 23:34:12.201075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:39.996 [2024-11-25 23:34:12.201082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.996 [2024-11-25 23:34:12.251980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:39.996 [2024-11-25 23:34:12.252017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:39.996 [2024-11-25 23:34:12.252026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:39.996 [2024-11-25 23:34:12.252033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.996 [2024-11-25 23:34:12.252108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:39.996 [2024-11-25 23:34:12.252117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:39.996 [2024-11-25 23:34:12.252124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:39.996 [2024-11-25 23:34:12.252130] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.996 [2024-11-25 23:34:12.252179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:39.996 [2024-11-25 23:34:12.252195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:39.996 [2024-11-25 23:34:12.252204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:39.996 [2024-11-25 23:34:12.252227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.996 [2024-11-25 23:34:12.252308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:39.996 [2024-11-25 23:34:12.252317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:39.996 [2024-11-25 23:34:12.252323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:39.996 [2024-11-25 23:34:12.252329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.996 [2024-11-25 23:34:12.252358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:39.996 [2024-11-25 23:34:12.252367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:39.996 [2024-11-25 23:34:12.252376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:39.996 [2024-11-25 23:34:12.252382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.996 [2024-11-25 23:34:12.252418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:39.996 [2024-11-25 23:34:12.252427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:39.996 [2024-11-25 23:34:12.252433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:39.996 [2024-11-25 23:34:12.252439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.996 [2024-11-25 23:34:12.252480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:39.996 [2024-11-25 23:34:12.252489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:39.996 [2024-11-25 23:34:12.252498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:39.996 [2024-11-25 23:34:12.252505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:39.996 [2024-11-25 23:34:12.252611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 209.201 ms, result 0 00:32:40.567 23:34:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:40.567 23:34:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:40.830 23:34:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:32:40.830 23:34:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:32:40.830 23:34:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:32:40.830 23:34:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:40.830 Remove shared memory files 00:32:40.830 23:34:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:32:40.830 23:34:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:40.830 23:34:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:40.830 23:34:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:40.830 23:34:12 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84473 00:32:40.830 23:34:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:40.830 23:34:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:40.830 ************************************ 00:32:40.830 END TEST ftl_upgrade_shutdown 00:32:40.830 ************************************ 00:32:40.830 00:32:40.830 real 1m22.932s 00:32:40.830 user 1m53.299s 00:32:40.830 sys 0m19.740s 00:32:40.830 23:34:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:40.830 23:34:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:40.830 23:34:12 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:32:40.830 23:34:12 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:32:40.830 23:34:12 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:32:40.830 23:34:12 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:40.830 23:34:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:40.830 ************************************ 00:32:40.830 START TEST ftl_restore_fast 00:32:40.830 ************************************ 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:32:40.830 * Looking for test storage... 00:32:40.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:40.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.830 --rc genhtml_branch_coverage=1 00:32:40.830 --rc genhtml_function_coverage=1 00:32:40.830 --rc genhtml_legend=1 00:32:40.830 --rc geninfo_all_blocks=1 00:32:40.830 --rc geninfo_unexecuted_blocks=1 00:32:40.830 00:32:40.830 ' 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:40.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.830 --rc genhtml_branch_coverage=1 00:32:40.830 --rc genhtml_function_coverage=1 00:32:40.830 --rc genhtml_legend=1 00:32:40.830 --rc geninfo_all_blocks=1 00:32:40.830 --rc geninfo_unexecuted_blocks=1 00:32:40.830 00:32:40.830 ' 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:40.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.830 --rc genhtml_branch_coverage=1 00:32:40.830 --rc genhtml_function_coverage=1 00:32:40.830 --rc genhtml_legend=1 00:32:40.830 --rc geninfo_all_blocks=1 00:32:40.830 --rc geninfo_unexecuted_blocks=1 00:32:40.830 00:32:40.830 ' 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:40.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.830 --rc genhtml_branch_coverage=1 00:32:40.830 --rc genhtml_function_coverage=1 00:32:40.830 --rc genhtml_legend=1 00:32:40.830 --rc geninfo_all_blocks=1 00:32:40.830 --rc geninfo_unexecuted_blocks=1 00:32:40.830 00:32:40.830 ' 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
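The scripts/common.sh xtrace above is a version comparison used to pick the lcov coverage flags: lt 1.15 2 (1.15 against the value awk pulled out of lcov --version) expands to cmp_versions 1.15 '<' 2, which splits both strings on '.-:' and compares them field by field; its success is what selects the --rc lcov_branch_coverage=1 spelling of LCOV_OPTS that follows. A condensed sketch of the helper, following the traced line numbers:

# Condensed sketch of lt/cmp_versions from scripts/common.sh as xtraced above.
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b ver1_l ver2_l
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        if (( a > b )); then [[ $op == '>' ]]; return; fi
        if (( a < b )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '=' ]]                      # every field equal
}
# cmp_versions 1.15 '<' 2 returns success at the first field (1 < 2).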
00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:32:40.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.WyxkBifV97 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:32:40.830 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:32:40.831 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:32:40.831 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:32:40.831 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=84978 00:32:40.831 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 84978 00:32:40.831 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 84978 ']' 00:32:40.831 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:40.831 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:40.831 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:40.831 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:40.831 23:34:13 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:40.831 23:34:13 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:41.092 [2024-11-25 23:34:13.268445] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
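The restore.sh prologue traced above is ordinary getopts plumbing: -f arms the fast-shutdown path, -c supplies the PCIe address of the NV-cache controller, the first remaining positional becomes the base device, and spdk_tgt is started in the background and waited on. A sketch of that prologue (the literal "shift 3" in the trace is the expanded form of shifting past the parsed options):

# Sketch of the restore.sh option handling xtraced above.
# Invocation in this run: restore.sh -f -c 0000:00:10.0 0000:00:11.0
mount_dir=$(mktemp -d)
fast_shutdown=0
while getopts ':u:c:f' opt; do
    case $opt in
        u) uuid=$OPTARG ;;       # reattach an existing FTL instance (unused here)
        c) nv_cache=$OPTARG ;;   # NV-cache controller, 0000:00:10.0 in this run
        f) fast_shutdown=1 ;;    # later appends --fast-shutdown to bdev_ftl_create
    esac
done
shift $((OPTIND - 1))            # xtraced as 'shift 3': -f plus -c and its argument
device=$1                        # base controller, 0000:00:11.0
timeout=240
trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
# Start the target and block until /var/tmp/spdk.sock accepts connections.
"$rootdir/build/bin/spdk_tgt" &
svcpid=$!
waitforlisten "$svcpid"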
00:32:41.092 [2024-11-25 23:34:13.268578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84978 ] 00:32:41.092 [2024-11-25 23:34:13.427483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.352 [2024-11-25 23:34:13.536100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.924 23:34:14 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:41.924 23:34:14 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:32:41.924 23:34:14 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:32:41.924 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:32:41.924 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:41.924 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:32:41.925 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:32:41.925 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:42.185 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:32:42.185 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:32:42.185 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:32:42.185 23:34:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:32:42.185 23:34:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:42.185 23:34:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:42.185 23:34:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:42.186 23:34:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:32:42.447 23:34:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:42.447 { 00:32:42.447 "name": "nvme0n1", 00:32:42.447 "aliases": [ 00:32:42.447 "cda71be3-dfb5-4966-bbb1-3eebbd13d83f" 00:32:42.447 ], 00:32:42.447 "product_name": "NVMe disk", 00:32:42.447 "block_size": 4096, 00:32:42.447 "num_blocks": 1310720, 00:32:42.447 "uuid": "cda71be3-dfb5-4966-bbb1-3eebbd13d83f", 00:32:42.447 "numa_id": -1, 00:32:42.447 "assigned_rate_limits": { 00:32:42.447 "rw_ios_per_sec": 0, 00:32:42.447 "rw_mbytes_per_sec": 0, 00:32:42.447 "r_mbytes_per_sec": 0, 00:32:42.447 "w_mbytes_per_sec": 0 00:32:42.447 }, 00:32:42.447 "claimed": true, 00:32:42.447 "claim_type": "read_many_write_one", 00:32:42.447 "zoned": false, 00:32:42.447 "supported_io_types": { 00:32:42.447 "read": true, 00:32:42.447 "write": true, 00:32:42.447 "unmap": true, 00:32:42.447 "flush": true, 00:32:42.447 "reset": true, 00:32:42.447 "nvme_admin": true, 00:32:42.447 "nvme_io": true, 00:32:42.447 "nvme_io_md": false, 00:32:42.447 "write_zeroes": true, 00:32:42.447 "zcopy": false, 00:32:42.447 "get_zone_info": false, 00:32:42.447 "zone_management": false, 00:32:42.447 "zone_append": false, 00:32:42.447 "compare": true, 00:32:42.447 "compare_and_write": false, 00:32:42.447 "abort": true, 00:32:42.447 "seek_hole": false, 00:32:42.447 "seek_data": false, 00:32:42.447 "copy": true, 00:32:42.447 "nvme_iov_md": 
false 00:32:42.447 }, 00:32:42.447 "driver_specific": { 00:32:42.447 "nvme": [ 00:32:42.447 { 00:32:42.447 "pci_address": "0000:00:11.0", 00:32:42.447 "trid": { 00:32:42.447 "trtype": "PCIe", 00:32:42.447 "traddr": "0000:00:11.0" 00:32:42.447 }, 00:32:42.447 "ctrlr_data": { 00:32:42.447 "cntlid": 0, 00:32:42.447 "vendor_id": "0x1b36", 00:32:42.447 "model_number": "QEMU NVMe Ctrl", 00:32:42.447 "serial_number": "12341", 00:32:42.447 "firmware_revision": "8.0.0", 00:32:42.447 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:42.447 "oacs": { 00:32:42.447 "security": 0, 00:32:42.447 "format": 1, 00:32:42.447 "firmware": 0, 00:32:42.447 "ns_manage": 1 00:32:42.447 }, 00:32:42.447 "multi_ctrlr": false, 00:32:42.447 "ana_reporting": false 00:32:42.447 }, 00:32:42.447 "vs": { 00:32:42.447 "nvme_version": "1.4" 00:32:42.447 }, 00:32:42.447 "ns_data": { 00:32:42.447 "id": 1, 00:32:42.447 "can_share": false 00:32:42.447 } 00:32:42.447 } 00:32:42.447 ], 00:32:42.447 "mp_policy": "active_passive" 00:32:42.447 } 00:32:42.447 } 00:32:42.447 ]' 00:32:42.447 23:34:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:42.447 23:34:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:42.447 23:34:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:42.447 23:34:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:42.447 23:34:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:42.447 23:34:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:32:42.447 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:32:42.447 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:32:42.447 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:32:42.447 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:42.448 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:42.709 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=7382a443-48f9-42a5-a778-e002feb26095 00:32:42.709 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:32:42.709 23:34:14 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7382a443-48f9-42a5-a778-e002feb26095 00:32:42.709 23:34:15 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:32:42.971 23:34:15 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=550552f7-c00f-45e1-9214-99def0d1bd3e 00:32:42.971 23:34:15 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 550552f7-c00f-45e1-9214-99def0d1bd3e 00:32:43.233 23:34:15 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=d0bf69b9-0188-4c04-8428-8c74b249d2da 00:32:43.233 23:34:15 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:32:43.233 23:34:15 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d0bf69b9-0188-4c04-8428-8c74b249d2da 00:32:43.233 23:34:15 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:32:43.233 23:34:15 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:43.233 23:34:15 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local 
base_bdev=d0bf69b9-0188-4c04-8428-8c74b249d2da 00:32:43.233 23:34:15 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:32:43.233 23:34:15 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size d0bf69b9-0188-4c04-8428-8c74b249d2da 00:32:43.233 23:34:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d0bf69b9-0188-4c04-8428-8c74b249d2da 00:32:43.233 23:34:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:43.233 23:34:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:43.233 23:34:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:43.233 23:34:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0bf69b9-0188-4c04-8428-8c74b249d2da 00:32:43.495 23:34:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:43.495 { 00:32:43.495 "name": "d0bf69b9-0188-4c04-8428-8c74b249d2da", 00:32:43.495 "aliases": [ 00:32:43.495 "lvs/nvme0n1p0" 00:32:43.495 ], 00:32:43.495 "product_name": "Logical Volume", 00:32:43.495 "block_size": 4096, 00:32:43.495 "num_blocks": 26476544, 00:32:43.495 "uuid": "d0bf69b9-0188-4c04-8428-8c74b249d2da", 00:32:43.495 "assigned_rate_limits": { 00:32:43.495 "rw_ios_per_sec": 0, 00:32:43.495 "rw_mbytes_per_sec": 0, 00:32:43.495 "r_mbytes_per_sec": 0, 00:32:43.495 "w_mbytes_per_sec": 0 00:32:43.495 }, 00:32:43.495 "claimed": false, 00:32:43.495 "zoned": false, 00:32:43.495 "supported_io_types": { 00:32:43.495 "read": true, 00:32:43.495 "write": true, 00:32:43.495 "unmap": true, 00:32:43.495 "flush": false, 00:32:43.495 "reset": true, 00:32:43.495 "nvme_admin": false, 00:32:43.495 "nvme_io": false, 00:32:43.495 "nvme_io_md": false, 00:32:43.495 "write_zeroes": true, 00:32:43.495 "zcopy": false, 00:32:43.495 "get_zone_info": false, 00:32:43.495 "zone_management": false, 00:32:43.495 "zone_append": false, 00:32:43.495 "compare": false, 00:32:43.495 "compare_and_write": false, 00:32:43.495 "abort": false, 00:32:43.495 "seek_hole": true, 00:32:43.495 "seek_data": true, 00:32:43.495 "copy": false, 00:32:43.495 "nvme_iov_md": false 00:32:43.495 }, 00:32:43.495 "driver_specific": { 00:32:43.495 "lvol": { 00:32:43.495 "lvol_store_uuid": "550552f7-c00f-45e1-9214-99def0d1bd3e", 00:32:43.495 "base_bdev": "nvme0n1", 00:32:43.495 "thin_provision": true, 00:32:43.495 "num_allocated_clusters": 0, 00:32:43.495 "snapshot": false, 00:32:43.495 "clone": false, 00:32:43.495 "esnap_clone": false 00:32:43.495 } 00:32:43.495 } 00:32:43.495 } 00:32:43.495 ]' 00:32:43.495 23:34:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:43.495 23:34:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:43.495 23:34:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:43.495 23:34:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:43.495 23:34:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:43.495 23:34:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:32:43.495 23:34:15 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:32:43.495 23:34:15 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:32:43.495 23:34:15 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
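Both bdev_get_bdevs dumps above feed the same get_bdev_size helper: it pulls block_size and num_blocks out of the JSON with jq and reports the size in MiB, which is how the log arrives at 5120 for the 1310720-block namespace and 103424 for the 26476544-block lvol. A sketch of the helper as xtraced (common/autotest_common.sh@1382-1392):

# Sketch of get_bdev_size as xtraced above; the result is in MiB.
get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$("$rootdir/scripts/rpc.py" bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for both bdevs here
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720, then 26476544
    # 1310720 * 4096 B = 5120 MiB; 26476544 * 4096 B = 103424 MiB.
    echo $(( nb * bs / 1024 / 1024 ))
}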
00:32:43.757 23:34:16 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:32:43.757 23:34:16 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:32:43.757 23:34:16 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size d0bf69b9-0188-4c04-8428-8c74b249d2da 00:32:43.757 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d0bf69b9-0188-4c04-8428-8c74b249d2da 00:32:43.757 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:43.757 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:43.757 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:43.757 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0bf69b9-0188-4c04-8428-8c74b249d2da 00:32:44.019 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:44.019 { 00:32:44.019 "name": "d0bf69b9-0188-4c04-8428-8c74b249d2da", 00:32:44.019 "aliases": [ 00:32:44.019 "lvs/nvme0n1p0" 00:32:44.019 ], 00:32:44.019 "product_name": "Logical Volume", 00:32:44.019 "block_size": 4096, 00:32:44.019 "num_blocks": 26476544, 00:32:44.019 "uuid": "d0bf69b9-0188-4c04-8428-8c74b249d2da", 00:32:44.019 "assigned_rate_limits": { 00:32:44.019 "rw_ios_per_sec": 0, 00:32:44.019 "rw_mbytes_per_sec": 0, 00:32:44.019 "r_mbytes_per_sec": 0, 00:32:44.019 "w_mbytes_per_sec": 0 00:32:44.019 }, 00:32:44.019 "claimed": false, 00:32:44.019 "zoned": false, 00:32:44.019 "supported_io_types": { 00:32:44.019 "read": true, 00:32:44.019 "write": true, 00:32:44.019 "unmap": true, 00:32:44.019 "flush": false, 00:32:44.019 "reset": true, 00:32:44.019 "nvme_admin": false, 00:32:44.019 "nvme_io": false, 00:32:44.019 "nvme_io_md": false, 00:32:44.019 "write_zeroes": true, 00:32:44.019 "zcopy": false, 00:32:44.019 "get_zone_info": false, 00:32:44.019 "zone_management": false, 00:32:44.019 "zone_append": false, 00:32:44.019 "compare": false, 00:32:44.019 "compare_and_write": false, 00:32:44.019 "abort": false, 00:32:44.019 "seek_hole": true, 00:32:44.019 "seek_data": true, 00:32:44.019 "copy": false, 00:32:44.019 "nvme_iov_md": false 00:32:44.019 }, 00:32:44.019 "driver_specific": { 00:32:44.019 "lvol": { 00:32:44.019 "lvol_store_uuid": "550552f7-c00f-45e1-9214-99def0d1bd3e", 00:32:44.019 "base_bdev": "nvme0n1", 00:32:44.019 "thin_provision": true, 00:32:44.019 "num_allocated_clusters": 0, 00:32:44.019 "snapshot": false, 00:32:44.019 "clone": false, 00:32:44.019 "esnap_clone": false 00:32:44.019 } 00:32:44.019 } 00:32:44.019 } 00:32:44.019 ]' 00:32:44.019 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:44.019 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:44.019 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:44.019 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:44.019 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:44.019 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:32:44.019 23:34:16 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:32:44.019 23:34:16 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:32:44.280 23:34:16 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 
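Strung together, the RPC calls traced above assemble the whole FTL topology before any ftl bdev exists: clear stale lvstores, carve a thin-provisioned 103424 MiB lvol out of nvme0n1 as the base device, then attach the second controller and split a 5171 MiB slice off nvc0n1 as the write-buffer cache (5171 is 103424/20 in integer arithmetic, so the cache lands at roughly 5 % of the base size). The same sequence as a standalone sketch, with the values from this run:

# Sketch of the topology setup xtraced above (clear_lvols plus create_nv_cache_bdev).
rpc="$rootdir/scripts/rpc.py"

# Drop lvstores left over from earlier tests (clear_lvols, ftl/common.sh@28-30).
for lvs in $("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
    "$rpc" bdev_lvol_delete_lvstore -u "$lvs"
done

# Base device: a thin-provisioned 103424 MiB lvol on top of nvme0n1.
lvs=$("$rpc" bdev_lvol_create_lvstore nvme0n1 lvs)
split_bdev=$("$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

# NV cache: attach 0000:00:10.0 and split one 5171 MiB chunk off nvc0n1.
"$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
"$rpc" bdev_split_create nvc0n1 -s 5171 1
nvc_bdev=nvc0n1p0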
00:32:44.280 23:34:16 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size d0bf69b9-0188-4c04-8428-8c74b249d2da 00:32:44.280 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d0bf69b9-0188-4c04-8428-8c74b249d2da 00:32:44.280 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:44.280 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:44.280 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:44.280 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0bf69b9-0188-4c04-8428-8c74b249d2da 00:32:44.542 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:44.542 { 00:32:44.542 "name": "d0bf69b9-0188-4c04-8428-8c74b249d2da", 00:32:44.542 "aliases": [ 00:32:44.542 "lvs/nvme0n1p0" 00:32:44.542 ], 00:32:44.542 "product_name": "Logical Volume", 00:32:44.542 "block_size": 4096, 00:32:44.542 "num_blocks": 26476544, 00:32:44.542 "uuid": "d0bf69b9-0188-4c04-8428-8c74b249d2da", 00:32:44.542 "assigned_rate_limits": { 00:32:44.542 "rw_ios_per_sec": 0, 00:32:44.542 "rw_mbytes_per_sec": 0, 00:32:44.542 "r_mbytes_per_sec": 0, 00:32:44.542 "w_mbytes_per_sec": 0 00:32:44.542 }, 00:32:44.542 "claimed": false, 00:32:44.542 "zoned": false, 00:32:44.542 "supported_io_types": { 00:32:44.543 "read": true, 00:32:44.543 "write": true, 00:32:44.543 "unmap": true, 00:32:44.543 "flush": false, 00:32:44.543 "reset": true, 00:32:44.543 "nvme_admin": false, 00:32:44.543 "nvme_io": false, 00:32:44.543 "nvme_io_md": false, 00:32:44.543 "write_zeroes": true, 00:32:44.543 "zcopy": false, 00:32:44.543 "get_zone_info": false, 00:32:44.543 "zone_management": false, 00:32:44.543 "zone_append": false, 00:32:44.543 "compare": false, 00:32:44.543 "compare_and_write": false, 00:32:44.543 "abort": false, 00:32:44.543 "seek_hole": true, 00:32:44.543 "seek_data": true, 00:32:44.543 "copy": false, 00:32:44.543 "nvme_iov_md": false 00:32:44.543 }, 00:32:44.543 "driver_specific": { 00:32:44.543 "lvol": { 00:32:44.543 "lvol_store_uuid": "550552f7-c00f-45e1-9214-99def0d1bd3e", 00:32:44.543 "base_bdev": "nvme0n1", 00:32:44.543 "thin_provision": true, 00:32:44.543 "num_allocated_clusters": 0, 00:32:44.543 "snapshot": false, 00:32:44.543 "clone": false, 00:32:44.543 "esnap_clone": false 00:32:44.543 } 00:32:44.543 } 00:32:44.543 } 00:32:44.543 ]' 00:32:44.543 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:44.543 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:44.543 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:44.543 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:44.543 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:44.543 23:34:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:32:44.543 23:34:16 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:32:44.543 23:34:16 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d0bf69b9-0188-4c04-8428-8c74b249d2da --l2p_dram_limit 10' 00:32:44.543 23:34:16 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:32:44.543 23:34:16 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:32:44.543 23:34:16 ftl.ftl_restore_fast 
-- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:32:44.543 23:34:16 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:32:44.543 23:34:16 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:32:44.543 23:34:16 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d0bf69b9-0188-4c04-8428-8c74b249d2da --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:32:44.805 [2024-11-25 23:34:16.914702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:44.805 [2024-11-25 23:34:16.914747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:44.805 [2024-11-25 23:34:16.914760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:44.805 [2024-11-25 23:34:16.914767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.805 [2024-11-25 23:34:16.914807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:44.805 [2024-11-25 23:34:16.914815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:44.805 [2024-11-25 23:34:16.914823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:32:44.805 [2024-11-25 23:34:16.914829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.805 [2024-11-25 23:34:16.914849] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:44.805 [2024-11-25 23:34:16.915384] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:44.805 [2024-11-25 23:34:16.915407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:44.805 [2024-11-25 23:34:16.915413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:44.805 [2024-11-25 23:34:16.915421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:32:44.806 [2024-11-25 23:34:16.915427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.806 [2024-11-25 23:34:16.915476] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 85c08bd9-98e2-4ce8-9a59-5099f5a41b5c 00:32:44.806 [2024-11-25 23:34:16.916763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:44.806 [2024-11-25 23:34:16.916793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:32:44.806 [2024-11-25 23:34:16.916802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:32:44.806 [2024-11-25 23:34:16.916813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.806 [2024-11-25 23:34:16.923831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:44.806 [2024-11-25 23:34:16.923862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:44.806 [2024-11-25 23:34:16.923870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.978 ms 00:32:44.806 [2024-11-25 23:34:16.923877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.806 [2024-11-25 23:34:16.923948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:44.806 [2024-11-25 23:34:16.923957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:44.806 [2024-11-25 23:34:16.923964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 
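Everything then funnels into the single RPC at restore.sh@58: the accumulated ftl_construct_args become one bdev_ftl_create call, with --l2p_dram_limit 10 capping the DRAM-resident L2P (the l2p_dram_size_mb=10 traced above) and --fast-shutdown selecting the quick persist path this test exists to exercise. Assembled in one place, the traced invocation is:

# The bdev_ftl_create call assembled from the xtrace above.
ftl_construct_args="bdev_ftl_create -b ftl0 -d d0bf69b9-0188-4c04-8428-8c74b249d2da --l2p_dram_limit 10"
ftl_construct_args+=" -c nvc0n1p0"                   # write-buffer cache bdev
[ "$fast_shutdown" -eq 1 ] && ftl_construct_args+=" --fast-shutdown"

# -t 240 raises the RPC client timeout: creating a brand-new FTL instance
# ('Create new FTL, UUID ...' in the startup trace) initializes the device
# from scratch and can run long.
"$rootdir/scripts/rpc.py" -t 240 $ftl_construct_args # unquoted so the args word-split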
00:32:44.806 [2024-11-25 23:34:16.923975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.806 [2024-11-25 23:34:16.924006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:44.806 [2024-11-25 23:34:16.924016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:44.806 [2024-11-25 23:34:16.924024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:44.806 [2024-11-25 23:34:16.924032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.806 [2024-11-25 23:34:16.924048] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:44.806 [2024-11-25 23:34:16.927345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:44.806 [2024-11-25 23:34:16.927370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:44.806 [2024-11-25 23:34:16.927380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.298 ms 00:32:44.806 [2024-11-25 23:34:16.927385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.806 [2024-11-25 23:34:16.927413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:44.806 [2024-11-25 23:34:16.927419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:44.806 [2024-11-25 23:34:16.927427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:44.806 [2024-11-25 23:34:16.927433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.806 [2024-11-25 23:34:16.927453] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:32:44.806 [2024-11-25 23:34:16.927564] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:44.806 [2024-11-25 23:34:16.927577] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:44.806 [2024-11-25 23:34:16.927585] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:44.806 [2024-11-25 23:34:16.927595] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:44.806 [2024-11-25 23:34:16.927601] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:44.806 [2024-11-25 23:34:16.927609] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:44.806 [2024-11-25 23:34:16.927615] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:44.806 [2024-11-25 23:34:16.927624] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:44.806 [2024-11-25 23:34:16.927629] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:44.806 [2024-11-25 23:34:16.927637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:44.806 [2024-11-25 23:34:16.927649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:44.806 [2024-11-25 23:34:16.927658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:32:44.806 [2024-11-25 23:34:16.927664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.806 [2024-11-25 23:34:16.927729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:44.806 [2024-11-25 
23:34:16.927735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:44.806 [2024-11-25 23:34:16.927743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:44.806 [2024-11-25 23:34:16.927748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.806 [2024-11-25 23:34:16.927828] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:44.806 [2024-11-25 23:34:16.927836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:44.806 [2024-11-25 23:34:16.927844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:44.806 [2024-11-25 23:34:16.927850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:44.806 [2024-11-25 23:34:16.927857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:44.806 [2024-11-25 23:34:16.927862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:44.806 [2024-11-25 23:34:16.927869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:44.806 [2024-11-25 23:34:16.927875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:44.806 [2024-11-25 23:34:16.927883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:44.806 [2024-11-25 23:34:16.927889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:44.806 [2024-11-25 23:34:16.927896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:44.806 [2024-11-25 23:34:16.927901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:44.806 [2024-11-25 23:34:16.927908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:44.806 [2024-11-25 23:34:16.927913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:44.806 [2024-11-25 23:34:16.927919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:44.806 [2024-11-25 23:34:16.927925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:44.806 [2024-11-25 23:34:16.927935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:44.806 [2024-11-25 23:34:16.927940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:44.806 [2024-11-25 23:34:16.927947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:44.806 [2024-11-25 23:34:16.927953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:44.806 [2024-11-25 23:34:16.927959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:44.806 [2024-11-25 23:34:16.927964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:44.806 [2024-11-25 23:34:16.927970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:44.806 [2024-11-25 23:34:16.927975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:44.806 [2024-11-25 23:34:16.927982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:44.806 [2024-11-25 23:34:16.927987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:44.806 [2024-11-25 23:34:16.927993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:44.806 [2024-11-25 23:34:16.927999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:44.806 [2024-11-25 23:34:16.928006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 
00:32:44.806 [2024-11-25 23:34:16.928011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:44.806 [2024-11-25 23:34:16.928017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:44.806 [2024-11-25 23:34:16.928022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:44.806 [2024-11-25 23:34:16.928030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:44.806 [2024-11-25 23:34:16.928035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:44.806 [2024-11-25 23:34:16.928041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:44.806 [2024-11-25 23:34:16.928046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:44.806 [2024-11-25 23:34:16.928053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:44.806 [2024-11-25 23:34:16.928069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:44.806 [2024-11-25 23:34:16.928076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:44.807 [2024-11-25 23:34:16.928081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:44.807 [2024-11-25 23:34:16.928089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:44.807 [2024-11-25 23:34:16.928094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:44.807 [2024-11-25 23:34:16.928101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:44.807 [2024-11-25 23:34:16.928105] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:44.807 [2024-11-25 23:34:16.928114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:44.807 [2024-11-25 23:34:16.928120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:44.807 [2024-11-25 23:34:16.928127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:44.807 [2024-11-25 23:34:16.928134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:44.807 [2024-11-25 23:34:16.928142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:44.807 [2024-11-25 23:34:16.928147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:44.807 [2024-11-25 23:34:16.928154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:44.807 [2024-11-25 23:34:16.928160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:44.807 [2024-11-25 23:34:16.928166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:44.807 [2024-11-25 23:34:16.928174] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:44.807 [2024-11-25 23:34:16.928185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:44.807 [2024-11-25 23:34:16.928192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:44.807 [2024-11-25 23:34:16.928199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:44.807 [2024-11-25 23:34:16.928205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 
00:32:44.807 [2024-11-25 23:34:16.928212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:44.807 [2024-11-25 23:34:16.928218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:44.807 [2024-11-25 23:34:16.928225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:44.807 [2024-11-25 23:34:16.928231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:44.807 [2024-11-25 23:34:16.928238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:44.807 [2024-11-25 23:34:16.928244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:44.807 [2024-11-25 23:34:16.928254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:44.807 [2024-11-25 23:34:16.928259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:44.807 [2024-11-25 23:34:16.928268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:44.807 [2024-11-25 23:34:16.928274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:44.807 [2024-11-25 23:34:16.928281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:44.807 [2024-11-25 23:34:16.928287] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:44.807 [2024-11-25 23:34:16.928294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:44.807 [2024-11-25 23:34:16.928301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:44.807 [2024-11-25 23:34:16.928310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:44.807 [2024-11-25 23:34:16.928316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:44.807 [2024-11-25 23:34:16.928324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:44.807 [2024-11-25 23:34:16.928331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:44.807 [2024-11-25 23:34:16.928338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:44.807 [2024-11-25 23:34:16.928345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:32:44.807 [2024-11-25 23:34:16.928352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.807 [2024-11-25 23:34:16.928392] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs 
scrubbing, this may take a while. 00:32:44.807 [2024-11-25 23:34:16.928408] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:32:49.018 [2024-11-25 23:34:20.650670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.018 [2024-11-25 23:34:20.650731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:32:49.018 [2024-11-25 23:34:20.650745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3722.261 ms 00:32:49.018 [2024-11-25 23:34:20.650754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.018 [2024-11-25 23:34:20.674406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.018 [2024-11-25 23:34:20.674450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:49.018 [2024-11-25 23:34:20.674463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.473 ms 00:32:49.018 [2024-11-25 23:34:20.674472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.018 [2024-11-25 23:34:20.674581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.018 [2024-11-25 23:34:20.674591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:49.018 [2024-11-25 23:34:20.674598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:49.018 [2024-11-25 23:34:20.674612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.018 [2024-11-25 23:34:20.701249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.018 [2024-11-25 23:34:20.701282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:49.018 [2024-11-25 23:34:20.701292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.596 ms 00:32:49.018 [2024-11-25 23:34:20.701301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.018 [2024-11-25 23:34:20.701329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.018 [2024-11-25 23:34:20.701338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:49.018 [2024-11-25 23:34:20.701344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:49.018 [2024-11-25 23:34:20.701358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.018 [2024-11-25 23:34:20.701761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.018 [2024-11-25 23:34:20.701896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:49.018 [2024-11-25 23:34:20.701904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:32:49.018 [2024-11-25 23:34:20.701912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.018 [2024-11-25 23:34:20.701995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.018 [2024-11-25 23:34:20.702004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:49.018 [2024-11-25 23:34:20.702013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:32:49.018 [2024-11-25 23:34:20.702023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.018 [2024-11-25 23:34:20.715068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.018 [2024-11-25 23:34:20.715098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 
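Two quick cross-checks of the numbers in the startup trace, assuming the conventional 4 KiB FTL block size (an assumption; the dump reports sizes in MiB and in blocks without naming the block size). Region type 0x2 (the L2P) has blk_sz 0x5000 = 20480 blocks, which is the 80.00 MiB the layout dump shows and also equals the 20971520 L2P entries times the 4-byte address size. Likewise, the 'Scrub NV cache' step covered the 5171.00 MiB cache device in 3722.261 ms, roughly 1.4 GiB/s — an upper-bound estimate, since the scrubbed data region is slightly smaller than the whole device:

# Cross-check the layout dump (assumes 4 KiB FTL blocks) and estimate the
# scrub rate; all input figures are copied from the trace above.
echo "l2p region: $(( 0x5000 * 4096 / 1024 / 1024 )) MiB"   # -> 80 MiB
echo "l2p table:  $(( 20971520 * 4 / 1024 / 1024 )) MiB"    # -> 80 MiB
echo "scrub rate: $(( 5171 * 1000 / 3722 )) MiB/s"          # -> ~1389 MiB/s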
00:32:49.018 [2024-11-25 23:34:20.715106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.031 ms 00:32:49.019 [2024-11-25 23:34:20.715114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.019 [2024-11-25 23:34:20.725149] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:49.019 [2024-11-25 23:34:20.728050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.019 [2024-11-25 23:34:20.728093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:49.019 [2024-11-25 23:34:20.728102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.877 ms 00:32:49.019 [2024-11-25 23:34:20.728109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.019 [2024-11-25 23:34:20.817610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.019 [2024-11-25 23:34:20.817644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:32:49.019 [2024-11-25 23:34:20.817657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.478 ms 00:32:49.019 [2024-11-25 23:34:20.817665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.019 [2024-11-25 23:34:20.817815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.019 [2024-11-25 23:34:20.817827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:49.019 [2024-11-25 23:34:20.817840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:32:49.019 [2024-11-25 23:34:20.817846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.019 [2024-11-25 23:34:20.836401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.019 [2024-11-25 23:34:20.836430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:32:49.019 [2024-11-25 23:34:20.836442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.519 ms 00:32:49.019 [2024-11-25 23:34:20.836449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.019 [2024-11-25 23:34:20.854563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.019 [2024-11-25 23:34:20.854589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:32:49.019 [2024-11-25 23:34:20.854600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.081 ms 00:32:49.019 [2024-11-25 23:34:20.854606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.019 [2024-11-25 23:34:20.855051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.019 [2024-11-25 23:34:20.855075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:49.019 [2024-11-25 23:34:20.855085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:32:49.019 [2024-11-25 23:34:20.855093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.019 [2024-11-25 23:34:20.918257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.019 [2024-11-25 23:34:20.918285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:32:49.019 [2024-11-25 23:34:20.918297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.128 ms 00:32:49.019 [2024-11-25 23:34:20.918305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.019 
[2024-11-25 23:34:20.938312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.019 [2024-11-25 23:34:20.938339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:32:49.019 [2024-11-25 23:34:20.938350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.953 ms 00:32:49.019 [2024-11-25 23:34:20.938356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.019 [2024-11-25 23:34:20.956396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.019 [2024-11-25 23:34:20.956422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:32:49.019 [2024-11-25 23:34:20.956432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.008 ms 00:32:49.019 [2024-11-25 23:34:20.956438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.019 [2024-11-25 23:34:20.975461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.019 [2024-11-25 23:34:20.975487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:49.019 [2024-11-25 23:34:20.975497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.993 ms 00:32:49.019 [2024-11-25 23:34:20.975503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.019 [2024-11-25 23:34:20.975536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.019 [2024-11-25 23:34:20.975543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:49.019 [2024-11-25 23:34:20.975554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:49.019 [2024-11-25 23:34:20.975560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.019 [2024-11-25 23:34:20.975635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.019 [2024-11-25 23:34:20.975645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:49.019 [2024-11-25 23:34:20.975654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:49.019 [2024-11-25 23:34:20.975661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.019 [2024-11-25 23:34:20.976484] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4061.400 ms, result 0 00:32:49.019 { 00:32:49.019 "name": "ftl0", 00:32:49.019 "uuid": "85c08bd9-98e2-4ce8-9a59-5099f5a41b5c" 00:32:49.019 } 00:32:49.019 23:34:20 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:32:49.019 23:34:20 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:32:49.019 23:34:21 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:32:49.019 23:34:21 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:49.019 [2024-11-25 23:34:21.380023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.019 [2024-11-25 23:34:21.380071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:49.019 [2024-11-25 23:34:21.380080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:49.019 [2024-11-25 23:34:21.380088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.019 [2024-11-25 23:34:21.380107] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:32:49.282 [2024-11-25 23:34:21.382385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.282 [2024-11-25 23:34:21.382408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:49.282 [2024-11-25 23:34:21.382417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.263 ms 00:32:49.282 [2024-11-25 23:34:21.382424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.282 [2024-11-25 23:34:21.382624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.282 [2024-11-25 23:34:21.382635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:49.282 [2024-11-25 23:34:21.382643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:32:49.282 [2024-11-25 23:34:21.382649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.282 [2024-11-25 23:34:21.385116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.282 [2024-11-25 23:34:21.385133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:49.282 [2024-11-25 23:34:21.385142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.453 ms 00:32:49.282 [2024-11-25 23:34:21.385148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.282 [2024-11-25 23:34:21.389841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.282 [2024-11-25 23:34:21.389862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:49.282 [2024-11-25 23:34:21.389873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.677 ms 00:32:49.282 [2024-11-25 23:34:21.389881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.282 [2024-11-25 23:34:21.407899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.282 [2024-11-25 23:34:21.407924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:49.282 [2024-11-25 23:34:21.407935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.966 ms 00:32:49.282 [2024-11-25 23:34:21.407941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.282 [2024-11-25 23:34:21.421089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.282 [2024-11-25 23:34:21.421115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:49.282 [2024-11-25 23:34:21.421126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.116 ms 00:32:49.282 [2024-11-25 23:34:21.421133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.282 [2024-11-25 23:34:21.421245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.282 [2024-11-25 23:34:21.421253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:49.282 [2024-11-25 23:34:21.421261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:32:49.282 [2024-11-25 23:34:21.421268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.282 [2024-11-25 23:34:21.440088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.282 [2024-11-25 23:34:21.440112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:49.282 [2024-11-25 23:34:21.440122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.803 ms 00:32:49.282 
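The unload now being traced is driven by restore.sh@61-65: the script snapshots the bdev subsystem configuration (wrapped in a subsystems array) before tearing the device down, so a later run can bring the same FTL device back up. A sketch of that sequence — the commands appear verbatim in the trace, but the redirect target is an assumption, inferred from the --json path handed to spdk_dd further down:

# Save the bdev subsystem config as a JSON subsystems document, then unload
# the FTL bdev; the saved config is what spdk_dd later uses to recreate ftl0.
{
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0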
[2024-11-25 23:34:21.440127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.282 [2024-11-25 23:34:21.458429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.282 [2024-11-25 23:34:21.458453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:49.282 [2024-11-25 23:34:21.458463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.273 ms 00:32:49.282 [2024-11-25 23:34:21.458468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.282 [2024-11-25 23:34:21.475995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.282 [2024-11-25 23:34:21.476020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:49.282 [2024-11-25 23:34:21.476029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.496 ms 00:32:49.282 [2024-11-25 23:34:21.476035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.282 [2024-11-25 23:34:21.493845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.282 [2024-11-25 23:34:21.493871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:49.282 [2024-11-25 23:34:21.493880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.737 ms 00:32:49.282 [2024-11-25 23:34:21.493886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.282 [2024-11-25 23:34:21.493913] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:49.282 [2024-11-25 23:34:21.493925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.493937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.493944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.493952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.493957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.493965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.493970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.493980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.493986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.493993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.493998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: 
free 00:32:49.282 [2024-11-25 23:34:21.494027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:49.282 [2024-11-25 23:34:21.494150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 
261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494541] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:49.283 [2024-11-25 23:34:21.494632] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:49.283 [2024-11-25 23:34:21.494639] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 85c08bd9-98e2-4ce8-9a59-5099f5a41b5c 00:32:49.283 [2024-11-25 23:34:21.494645] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:49.283 [2024-11-25 23:34:21.494654] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:49.283 [2024-11-25 23:34:21.494662] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:49.283 [2024-11-25 23:34:21.494670] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:49.283 [2024-11-25 23:34:21.494675] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:49.283 [2024-11-25 23:34:21.494682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:49.283 [2024-11-25 23:34:21.494688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:49.283 [2024-11-25 23:34:21.494694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:49.283 [2024-11-25 23:34:21.494699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:49.283 [2024-11-25 23:34:21.494706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.283 [2024-11-25 23:34:21.494712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:49.283 [2024-11-25 23:34:21.494720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:32:49.283 [2024-11-25 23:34:21.494729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.283 [2024-11-25 23:34:21.504358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.283 [2024-11-25 23:34:21.504381] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:49.283 [2024-11-25 23:34:21.504391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.604 ms 00:32:49.283 [2024-11-25 23:34:21.504397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.283 [2024-11-25 23:34:21.504651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.283 [2024-11-25 23:34:21.504659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:49.283 [2024-11-25 23:34:21.504669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:32:49.283 [2024-11-25 23:34:21.504674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.283 [2024-11-25 23:34:21.539596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.283 [2024-11-25 23:34:21.539623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:49.283 [2024-11-25 23:34:21.539633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.284 [2024-11-25 23:34:21.539639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.284 [2024-11-25 23:34:21.539687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.284 [2024-11-25 23:34:21.539694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:49.284 [2024-11-25 23:34:21.539704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.284 [2024-11-25 23:34:21.539709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.284 [2024-11-25 23:34:21.539761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.284 [2024-11-25 23:34:21.539769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:49.284 [2024-11-25 23:34:21.539777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.284 [2024-11-25 23:34:21.539783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.284 [2024-11-25 23:34:21.539801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.284 [2024-11-25 23:34:21.539809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:49.284 [2024-11-25 23:34:21.539818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.284 [2024-11-25 23:34:21.539826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.284 [2024-11-25 23:34:21.603201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.284 [2024-11-25 23:34:21.603235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:49.284 [2024-11-25 23:34:21.603245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.284 [2024-11-25 23:34:21.603252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.546 [2024-11-25 23:34:21.654731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.546 [2024-11-25 23:34:21.654765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:49.546 [2024-11-25 23:34:21.654777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.546 [2024-11-25 23:34:21.654786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.546 [2024-11-25 23:34:21.654874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
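One detail worth decoding in the statistics dumped during this shutdown: all 100 bands are free with wr_cnt 0, user writes are 0, and the 960 total writes are FTL metadata from startup and shutdown, so the write amplification factor (total writes divided by user writes) is undefined and printed as inf. A one-liner reproducing that report, with the figures taken from the dump:

# WAF = total_writes / user_writes; with zero user writes the ratio is
# undefined, which ftl_debug.c's stats dump renders as 'inf'.
awk 'BEGIN { total = 960; user = 0; print (user ? total / user : "inf") }'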
00:32:49.546 [2024-11-25 23:34:21.654882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:49.546 [2024-11-25 23:34:21.654890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.546 [2024-11-25 23:34:21.654896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.546 [2024-11-25 23:34:21.654937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.546 [2024-11-25 23:34:21.654946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:49.546 [2024-11-25 23:34:21.654955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.546 [2024-11-25 23:34:21.654960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.546 [2024-11-25 23:34:21.655044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.546 [2024-11-25 23:34:21.655053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:49.546 [2024-11-25 23:34:21.655073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.546 [2024-11-25 23:34:21.655080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.546 [2024-11-25 23:34:21.655110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.546 [2024-11-25 23:34:21.655117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:49.546 [2024-11-25 23:34:21.655126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.546 [2024-11-25 23:34:21.655133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.546 [2024-11-25 23:34:21.655172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.546 [2024-11-25 23:34:21.655181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:49.546 [2024-11-25 23:34:21.655190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.546 [2024-11-25 23:34:21.655195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.546 [2024-11-25 23:34:21.655241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.546 [2024-11-25 23:34:21.655249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:49.546 [2024-11-25 23:34:21.655258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.546 [2024-11-25 23:34:21.655264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.546 [2024-11-25 23:34:21.655384] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.327 ms, result 0 00:32:49.546 true 00:32:49.546 23:34:21 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 84978 00:32:49.546 23:34:21 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84978 ']' 00:32:49.546 23:34:21 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84978 00:32:49.546 23:34:21 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:32:49.546 23:34:21 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:49.546 23:34:21 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84978 00:32:49.546 23:34:21 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:49.546 23:34:21 ftl.ftl_restore_fast -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:49.546 23:34:21 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84978' 00:32:49.546 killing process with pid 84978 00:32:49.546 23:34:21 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 84978 00:32:49.546 23:34:21 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 84978 00:32:56.138 23:34:27 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:32:59.436 262144+0 records in 00:32:59.436 262144+0 records out 00:32:59.436 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.85244 s, 279 MB/s 00:32:59.436 23:34:31 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:00.823 23:34:32 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:00.823 [2024-11-25 23:34:32.805137] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:33:00.823 [2024-11-25 23:34:32.805239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85199 ] 00:33:00.823 [2024-11-25 23:34:32.955108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.823 [2024-11-25 23:34:33.045722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.086 [2024-11-25 23:34:33.272571] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:01.086 [2024-11-25 23:34:33.272628] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:01.086 [2024-11-25 23:34:33.428261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.086 [2024-11-25 23:34:33.428303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:01.086 [2024-11-25 23:34:33.428314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:01.086 [2024-11-25 23:34:33.428322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.086 [2024-11-25 23:34:33.428363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.086 [2024-11-25 23:34:33.428373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:01.086 [2024-11-25 23:34:33.428381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:33:01.086 [2024-11-25 23:34:33.428387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.086 [2024-11-25 23:34:33.428400] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:01.086 [2024-11-25 23:34:33.428972] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:01.086 [2024-11-25 23:34:33.428990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.086 [2024-11-25 23:34:33.428996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:01.086 [2024-11-25 23:34:33.429004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:33:01.086 [2024-11-25 23:34:33.429009] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.086 [2024-11-25 23:34:33.430328] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:01.086 [2024-11-25 23:34:33.440443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.086 [2024-11-25 23:34:33.440469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:01.086 [2024-11-25 23:34:33.440479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.117 ms 00:33:01.086 [2024-11-25 23:34:33.440485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.086 [2024-11-25 23:34:33.440531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.086 [2024-11-25 23:34:33.440539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:01.086 [2024-11-25 23:34:33.440546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:33:01.086 [2024-11-25 23:34:33.440552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.086 [2024-11-25 23:34:33.446831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.086 [2024-11-25 23:34:33.446855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:01.086 [2024-11-25 23:34:33.446863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.238 ms 00:33:01.086 [2024-11-25 23:34:33.446873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.086 [2024-11-25 23:34:33.446928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.086 [2024-11-25 23:34:33.446935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:01.086 [2024-11-25 23:34:33.446942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:33:01.086 [2024-11-25 23:34:33.446948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.086 [2024-11-25 23:34:33.446986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.086 [2024-11-25 23:34:33.446994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:01.086 [2024-11-25 23:34:33.447000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:01.086 [2024-11-25 23:34:33.447006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.086 [2024-11-25 23:34:33.447023] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:01.349 [2024-11-25 23:34:33.449942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.349 [2024-11-25 23:34:33.449965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:01.349 [2024-11-25 23:34:33.449975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.923 ms 00:33:01.349 [2024-11-25 23:34:33.449981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.349 [2024-11-25 23:34:33.450007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.349 [2024-11-25 23:34:33.450014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:01.349 [2024-11-25 23:34:33.450020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:01.349 [2024-11-25 23:34:33.450026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.349 [2024-11-25 23:34:33.450040] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:01.349 [2024-11-25 23:34:33.450072] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:01.349 [2024-11-25 23:34:33.450101] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:01.349 [2024-11-25 23:34:33.450117] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:01.349 [2024-11-25 23:34:33.450201] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:01.349 [2024-11-25 23:34:33.450209] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:01.349 [2024-11-25 23:34:33.450218] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:01.349 [2024-11-25 23:34:33.450226] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:01.349 [2024-11-25 23:34:33.450233] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:01.349 [2024-11-25 23:34:33.450239] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:01.349 [2024-11-25 23:34:33.450246] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:01.349 [2024-11-25 23:34:33.450252] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:01.349 [2024-11-25 23:34:33.450260] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:01.349 [2024-11-25 23:34:33.450267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.349 [2024-11-25 23:34:33.450272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:01.349 [2024-11-25 23:34:33.450279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:33:01.349 [2024-11-25 23:34:33.450285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.349 [2024-11-25 23:34:33.450348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.349 [2024-11-25 23:34:33.450356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:01.349 [2024-11-25 23:34:33.450362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:33:01.349 [2024-11-25 23:34:33.450368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.349 [2024-11-25 23:34:33.450447] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:01.349 [2024-11-25 23:34:33.450461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:01.349 [2024-11-25 23:34:33.450468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:01.349 [2024-11-25 23:34:33.450474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:01.349 [2024-11-25 23:34:33.450480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:01.349 [2024-11-25 23:34:33.450487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:01.349 [2024-11-25 23:34:33.450492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:01.349 [2024-11-25 23:34:33.450498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
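This second startup is the restore path proper: spdk_dd recreates ftl0 from the saved ftl.json and replays the prepared test data into it — note 'FTL layout setup mode 0' and the 'SHM: clean 0, shm_clean 0' superblock load above, versus 'mode 1' when the device was first created from scratch. A condensed sketch of the data-preparation and replay flow from restore.sh@69 onward, with paths verbatim from the trace; the md5sum is presumably recorded so the contents can be re-verified after passing through the restored device:

# 256K blocks of 4 KiB = 1 GiB of random test data, checksummed, then written
# into the FTL bdev through spdk_dd using the saved subsystem config.
testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
dd if=/dev/urandom of="$testfile" bs=4K count=256K        # 1073741824 bytes
md5sum "$testfile"
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if="$testfile" --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json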
00:33:01.349 [2024-11-25 23:34:33.450503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:01.349 [2024-11-25 23:34:33.450509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:01.349 [2024-11-25 23:34:33.450516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:01.349 [2024-11-25 23:34:33.450521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:01.349 [2024-11-25 23:34:33.450526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:01.349 [2024-11-25 23:34:33.450536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:01.349 [2024-11-25 23:34:33.450541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:01.349 [2024-11-25 23:34:33.450547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:01.349 [2024-11-25 23:34:33.450553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:01.349 [2024-11-25 23:34:33.450559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:01.349 [2024-11-25 23:34:33.450564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:01.349 [2024-11-25 23:34:33.450569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:01.349 [2024-11-25 23:34:33.450575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:01.349 [2024-11-25 23:34:33.450580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:01.349 [2024-11-25 23:34:33.450585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:01.350 [2024-11-25 23:34:33.450590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:01.350 [2024-11-25 23:34:33.450595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:01.350 [2024-11-25 23:34:33.450600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:01.350 [2024-11-25 23:34:33.450605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:01.350 [2024-11-25 23:34:33.450610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:01.350 [2024-11-25 23:34:33.450616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:01.350 [2024-11-25 23:34:33.450622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:01.350 [2024-11-25 23:34:33.450627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:01.350 [2024-11-25 23:34:33.450631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:01.350 [2024-11-25 23:34:33.450636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:01.350 [2024-11-25 23:34:33.450642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:01.350 [2024-11-25 23:34:33.450647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:01.350 [2024-11-25 23:34:33.450651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:01.350 [2024-11-25 23:34:33.450656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:01.350 [2024-11-25 23:34:33.450663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:01.350 [2024-11-25 23:34:33.450669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:01.350 [2024-11-25 23:34:33.450681] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:01.350 [2024-11-25 23:34:33.450686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:01.350 [2024-11-25 23:34:33.450693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:01.350 [2024-11-25 23:34:33.450699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:01.350 [2024-11-25 23:34:33.450704] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:01.350 [2024-11-25 23:34:33.450710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:01.350 [2024-11-25 23:34:33.450716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:01.350 [2024-11-25 23:34:33.450722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:01.350 [2024-11-25 23:34:33.450728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:01.350 [2024-11-25 23:34:33.450734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:01.350 [2024-11-25 23:34:33.450740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:01.350 [2024-11-25 23:34:33.450745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:01.350 [2024-11-25 23:34:33.450751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:01.350 [2024-11-25 23:34:33.450756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:01.350 [2024-11-25 23:34:33.450763] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:01.350 [2024-11-25 23:34:33.450771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:01.350 [2024-11-25 23:34:33.450780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:01.350 [2024-11-25 23:34:33.450787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:01.350 [2024-11-25 23:34:33.450793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:01.350 [2024-11-25 23:34:33.450799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:01.350 [2024-11-25 23:34:33.450805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:01.350 [2024-11-25 23:34:33.450811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:01.350 [2024-11-25 23:34:33.450817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:01.350 [2024-11-25 23:34:33.450822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:01.350 [2024-11-25 23:34:33.450827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:01.350 [2024-11-25 23:34:33.450833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:33:01.350 [2024-11-25 23:34:33.450838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:01.350 [2024-11-25 23:34:33.450843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:01.350 [2024-11-25 23:34:33.450849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:01.350 [2024-11-25 23:34:33.450854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:01.350 [2024-11-25 23:34:33.450859] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:01.350 [2024-11-25 23:34:33.450865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:01.350 [2024-11-25 23:34:33.450871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:01.350 [2024-11-25 23:34:33.450876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:01.350 [2024-11-25 23:34:33.450882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:01.350 [2024-11-25 23:34:33.450888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:01.350 [2024-11-25 23:34:33.450896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.350 [2024-11-25 23:34:33.450902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:01.350 [2024-11-25 23:34:33.450909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:33:01.350 [2024-11-25 23:34:33.450914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.350 [2024-11-25 23:34:33.474980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.350 [2024-11-25 23:34:33.475009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:01.350 [2024-11-25 23:34:33.475019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.025 ms 00:33:01.350 [2024-11-25 23:34:33.475029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.350 [2024-11-25 23:34:33.475107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.350 [2024-11-25 23:34:33.475114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:01.350 [2024-11-25 23:34:33.475120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:33:01.350 [2024-11-25 23:34:33.475126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.350 [2024-11-25 23:34:33.519066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.350 [2024-11-25 23:34:33.519098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:01.350 [2024-11-25 23:34:33.519107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.898 ms 00:33:01.350 [2024-11-25 23:34:33.519114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:33:01.350 [2024-11-25 23:34:33.519148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.350 [2024-11-25 23:34:33.519156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:01.350 [2024-11-25 23:34:33.519165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:01.350 [2024-11-25 23:34:33.519171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.350 [2024-11-25 23:34:33.519578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.350 [2024-11-25 23:34:33.519603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:01.350 [2024-11-25 23:34:33.519611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:33:01.350 [2024-11-25 23:34:33.519617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.350 [2024-11-25 23:34:33.519726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.350 [2024-11-25 23:34:33.519742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:01.350 [2024-11-25 23:34:33.519748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:33:01.350 [2024-11-25 23:34:33.519760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.350 [2024-11-25 23:34:33.531576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.350 [2024-11-25 23:34:33.531602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:01.350 [2024-11-25 23:34:33.531612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.798 ms 00:33:01.350 [2024-11-25 23:34:33.531618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.350 [2024-11-25 23:34:33.542245] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:01.350 [2024-11-25 23:34:33.542275] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:01.350 [2024-11-25 23:34:33.542285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.350 [2024-11-25 23:34:33.542291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:01.350 [2024-11-25 23:34:33.542299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.578 ms 00:33:01.350 [2024-11-25 23:34:33.542304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.350 [2024-11-25 23:34:33.561040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.350 [2024-11-25 23:34:33.561075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:01.350 [2024-11-25 23:34:33.561084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.703 ms 00:33:01.350 [2024-11-25 23:34:33.561091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.350 [2024-11-25 23:34:33.570374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.350 [2024-11-25 23:34:33.570400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:01.350 [2024-11-25 23:34:33.570407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.252 ms 00:33:01.350 [2024-11-25 23:34:33.570413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.350 [2024-11-25 23:34:33.579268] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.350 [2024-11-25 23:34:33.579294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:01.350 [2024-11-25 23:34:33.579302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.829 ms 00:33:01.350 [2024-11-25 23:34:33.579308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.351 [2024-11-25 23:34:33.579758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.351 [2024-11-25 23:34:33.579774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:01.351 [2024-11-25 23:34:33.579782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:33:01.351 [2024-11-25 23:34:33.579791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.351 [2024-11-25 23:34:33.628070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.351 [2024-11-25 23:34:33.628101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:01.351 [2024-11-25 23:34:33.628111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.267 ms 00:33:01.351 [2024-11-25 23:34:33.628121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.351 [2024-11-25 23:34:33.637040] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:01.351 [2024-11-25 23:34:33.639505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.351 [2024-11-25 23:34:33.639528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:01.351 [2024-11-25 23:34:33.639538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.351 ms 00:33:01.351 [2024-11-25 23:34:33.639546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.351 [2024-11-25 23:34:33.639601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.351 [2024-11-25 23:34:33.639610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:01.351 [2024-11-25 23:34:33.639617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:01.351 [2024-11-25 23:34:33.639624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.351 [2024-11-25 23:34:33.639700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.351 [2024-11-25 23:34:33.639710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:01.351 [2024-11-25 23:34:33.639717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:33:01.351 [2024-11-25 23:34:33.639723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.351 [2024-11-25 23:34:33.639740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.351 [2024-11-25 23:34:33.639747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:01.351 [2024-11-25 23:34:33.639754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:01.351 [2024-11-25 23:34:33.639761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.351 [2024-11-25 23:34:33.639790] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:01.351 [2024-11-25 23:34:33.639802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.351 [2024-11-25 23:34:33.639808] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:01.351 [2024-11-25 23:34:33.639816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:01.351 [2024-11-25 23:34:33.639822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.351 [2024-11-25 23:34:33.657881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.351 [2024-11-25 23:34:33.657909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:01.351 [2024-11-25 23:34:33.657917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.045 ms 00:33:01.351 [2024-11-25 23:34:33.657925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.351 [2024-11-25 23:34:33.657984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.351 [2024-11-25 23:34:33.657992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:01.351 [2024-11-25 23:34:33.658000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:33:01.351 [2024-11-25 23:34:33.658007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.351 [2024-11-25 23:34:33.658849] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 230.225 ms, result 0 00:33:02.740  [2024-11-25T23:34:35.684Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-25T23:34:37.068Z] Copying: 46/1024 [MB] (22 MBps) [2024-11-25T23:34:38.014Z] Copying: 63/1024 [MB] (17 MBps) [2024-11-25T23:34:38.960Z] Copying: 79/1024 [MB] (16 MBps) [2024-11-25T23:34:39.906Z] Copying: 96/1024 [MB] (16 MBps) [2024-11-25T23:34:41.019Z] Copying: 113/1024 [MB] (16 MBps) [2024-11-25T23:34:41.963Z] Copying: 125/1024 [MB] (12 MBps) [2024-11-25T23:34:42.908Z] Copying: 136/1024 [MB] (11 MBps) [2024-11-25T23:34:43.852Z] Copying: 147/1024 [MB] (11 MBps) [2024-11-25T23:34:44.795Z] Copying: 158/1024 [MB] (11 MBps) [2024-11-25T23:34:45.742Z] Copying: 170/1024 [MB] (11 MBps) [2024-11-25T23:34:46.687Z] Copying: 180/1024 [MB] (10 MBps) [2024-11-25T23:34:48.076Z] Copying: 190/1024 [MB] (10 MBps) [2024-11-25T23:34:49.022Z] Copying: 201/1024 [MB] (10 MBps) [2024-11-25T23:34:49.967Z] Copying: 212/1024 [MB] (11 MBps) [2024-11-25T23:34:50.912Z] Copying: 224/1024 [MB] (11 MBps) [2024-11-25T23:34:51.859Z] Copying: 235/1024 [MB] (11 MBps) [2024-11-25T23:34:52.804Z] Copying: 247/1024 [MB] (11 MBps) [2024-11-25T23:34:53.747Z] Copying: 258/1024 [MB] (11 MBps) [2024-11-25T23:34:54.692Z] Copying: 268/1024 [MB] (10 MBps) [2024-11-25T23:34:56.080Z] Copying: 279/1024 [MB] (11 MBps) [2024-11-25T23:34:57.026Z] Copying: 291/1024 [MB] (11 MBps) [2024-11-25T23:34:57.972Z] Copying: 302/1024 [MB] (11 MBps) [2024-11-25T23:34:58.918Z] Copying: 314/1024 [MB] (11 MBps) [2024-11-25T23:34:59.860Z] Copying: 325/1024 [MB] (11 MBps) [2024-11-25T23:35:00.806Z] Copying: 336/1024 [MB] (11 MBps) [2024-11-25T23:35:01.751Z] Copying: 347/1024 [MB] (11 MBps) [2024-11-25T23:35:02.696Z] Copying: 359/1024 [MB] (11 MBps) [2024-11-25T23:35:04.084Z] Copying: 371/1024 [MB] (11 MBps) [2024-11-25T23:35:05.030Z] Copying: 382/1024 [MB] (11 MBps) [2024-11-25T23:35:05.975Z] Copying: 394/1024 [MB] (11 MBps) [2024-11-25T23:35:06.964Z] Copying: 405/1024 [MB] (11 MBps) [2024-11-25T23:35:07.908Z] Copying: 417/1024 [MB] (11 MBps) [2024-11-25T23:35:08.851Z] Copying: 428/1024 [MB] (11 MBps) [2024-11-25T23:35:09.794Z] Copying: 440/1024 [MB] (11 MBps) [2024-11-25T23:35:10.735Z] Copying: 451/1024 [MB] (11 MBps) 
[2024-11-25T23:35:11.679Z] Copying: 463/1024 [MB] (11 MBps) [2024-11-25T23:35:13.065Z] Copying: 474/1024 [MB] (11 MBps) [2024-11-25T23:35:14.009Z] Copying: 487/1024 [MB] (12 MBps) [2024-11-25T23:35:14.953Z] Copying: 498/1024 [MB] (11 MBps) [2024-11-25T23:35:15.897Z] Copying: 510/1024 [MB] (11 MBps) [2024-11-25T23:35:16.843Z] Copying: 521/1024 [MB] (11 MBps) [2024-11-25T23:35:17.786Z] Copying: 531/1024 [MB] (10 MBps) [2024-11-25T23:35:18.727Z] Copying: 542/1024 [MB] (10 MBps) [2024-11-25T23:35:20.109Z] Copying: 553/1024 [MB] (11 MBps) [2024-11-25T23:35:20.681Z] Copying: 565/1024 [MB] (11 MBps) [2024-11-25T23:35:22.067Z] Copying: 577/1024 [MB] (12 MBps) [2024-11-25T23:35:23.013Z] Copying: 589/1024 [MB] (11 MBps) [2024-11-25T23:35:23.955Z] Copying: 600/1024 [MB] (11 MBps) [2024-11-25T23:35:24.897Z] Copying: 611/1024 [MB] (11 MBps) [2024-11-25T23:35:25.840Z] Copying: 622/1024 [MB] (11 MBps) [2024-11-25T23:35:26.783Z] Copying: 634/1024 [MB] (11 MBps) [2024-11-25T23:35:27.726Z] Copying: 645/1024 [MB] (11 MBps) [2024-11-25T23:35:29.108Z] Copying: 657/1024 [MB] (11 MBps) [2024-11-25T23:35:29.678Z] Copying: 669/1024 [MB] (11 MBps) [2024-11-25T23:35:31.062Z] Copying: 680/1024 [MB] (11 MBps) [2024-11-25T23:35:32.007Z] Copying: 692/1024 [MB] (11 MBps) [2024-11-25T23:35:32.951Z] Copying: 704/1024 [MB] (12 MBps) [2024-11-25T23:35:33.896Z] Copying: 716/1024 [MB] (11 MBps) [2024-11-25T23:35:34.840Z] Copying: 727/1024 [MB] (10 MBps) [2024-11-25T23:35:35.784Z] Copying: 738/1024 [MB] (11 MBps) [2024-11-25T23:35:36.727Z] Copying: 749/1024 [MB] (10 MBps) [2024-11-25T23:35:38.113Z] Copying: 761/1024 [MB] (11 MBps) [2024-11-25T23:35:38.685Z] Copying: 772/1024 [MB] (11 MBps) [2024-11-25T23:35:40.072Z] Copying: 784/1024 [MB] (11 MBps) [2024-11-25T23:35:41.054Z] Copying: 795/1024 [MB] (11 MBps) [2024-11-25T23:35:42.011Z] Copying: 808/1024 [MB] (12 MBps) [2024-11-25T23:35:42.953Z] Copying: 818/1024 [MB] (10 MBps) [2024-11-25T23:35:43.910Z] Copying: 829/1024 [MB] (11 MBps) [2024-11-25T23:35:44.852Z] Copying: 840/1024 [MB] (10 MBps) [2024-11-25T23:35:45.794Z] Copying: 850/1024 [MB] (10 MBps) [2024-11-25T23:35:46.737Z] Copying: 861/1024 [MB] (11 MBps) [2024-11-25T23:35:47.681Z] Copying: 872/1024 [MB] (11 MBps) [2024-11-25T23:35:49.068Z] Copying: 889/1024 [MB] (16 MBps) [2024-11-25T23:35:50.010Z] Copying: 900/1024 [MB] (11 MBps) [2024-11-25T23:35:50.952Z] Copying: 911/1024 [MB] (11 MBps) [2024-11-25T23:35:51.894Z] Copying: 922/1024 [MB] (11 MBps) [2024-11-25T23:35:52.837Z] Copying: 934/1024 [MB] (11 MBps) [2024-11-25T23:35:53.781Z] Copying: 945/1024 [MB] (11 MBps) [2024-11-25T23:35:54.724Z] Copying: 956/1024 [MB] (10 MBps) [2024-11-25T23:35:56.110Z] Copying: 967/1024 [MB] (11 MBps) [2024-11-25T23:35:56.682Z] Copying: 978/1024 [MB] (11 MBps) [2024-11-25T23:35:58.067Z] Copying: 990/1024 [MB] (11 MBps) [2024-11-25T23:35:59.010Z] Copying: 1001/1024 [MB] (11 MBps) [2024-11-25T23:35:59.953Z] Copying: 1012/1024 [MB] (11 MBps) [2024-11-25T23:35:59.953Z] Copying: 1023/1024 [MB] (11 MBps) [2024-11-25T23:35:59.953Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-25 23:35:59.693889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.584 [2024-11-25 23:35:59.693938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:27.584 [2024-11-25 23:35:59.693951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:27.584 [2024-11-25 23:35:59.693958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.584 [2024-11-25 23:35:59.693974] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:27.584 [2024-11-25 23:35:59.696223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.584 [2024-11-25 23:35:59.696246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:27.584 [2024-11-25 23:35:59.696255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.237 ms 00:34:27.584 [2024-11-25 23:35:59.696266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.584 [2024-11-25 23:35:59.698746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.585 [2024-11-25 23:35:59.698769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:27.585 [2024-11-25 23:35:59.698778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.464 ms 00:34:27.585 [2024-11-25 23:35:59.698784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.585 [2024-11-25 23:35:59.698804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.585 [2024-11-25 23:35:59.698811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:27.585 [2024-11-25 23:35:59.698818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:27.585 [2024-11-25 23:35:59.698824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.585 [2024-11-25 23:35:59.698870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.585 [2024-11-25 23:35:59.698877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:27.585 [2024-11-25 23:35:59.698883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:34:27.585 [2024-11-25 23:35:59.698889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.585 [2024-11-25 23:35:59.698900] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:27.585 [2024-11-25 23:35:59.698910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.698917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.698923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.698929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.698935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.698940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.698947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.698952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.698957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.698963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.698969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.698975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.698981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.698986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.698992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.698997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699129] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:27.585 [2024-11-25 23:35:59.699262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 
23:35:59.699278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 
00:34:27.586 [2024-11-25 23:35:59.699419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:27.586 [2024-11-25 23:35:59.699508] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:27.586 [2024-11-25 23:35:59.699514] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 85c08bd9-98e2-4ce8-9a59-5099f5a41b5c 00:34:27.586 [2024-11-25 23:35:59.699520] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:27.586 [2024-11-25 23:35:59.699526] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:34:27.586 [2024-11-25 23:35:59.699531] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:27.586 [2024-11-25 23:35:59.699541] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:27.586 [2024-11-25 23:35:59.699547] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:27.586 [2024-11-25 23:35:59.699553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:27.586 [2024-11-25 23:35:59.699562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:27.586 [2024-11-25 23:35:59.699568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:27.586 [2024-11-25 23:35:59.699572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:27.586 [2024-11-25 23:35:59.699578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.586 [2024-11-25 23:35:59.699584] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:27.586 [2024-11-25 23:35:59.699590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:34:27.586 [2024-11-25 23:35:59.699596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.586 [2024-11-25 23:35:59.709585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.586 [2024-11-25 23:35:59.709611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:27.586 [2024-11-25 23:35:59.709619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.975 ms 00:34:27.586 [2024-11-25 23:35:59.709626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.586 [2024-11-25 23:35:59.709913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.586 [2024-11-25 23:35:59.709924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:27.586 [2024-11-25 23:35:59.709931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:34:27.586 [2024-11-25 23:35:59.709937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.586 [2024-11-25 23:35:59.737500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.586 [2024-11-25 23:35:59.737525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:27.586 [2024-11-25 23:35:59.737533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.586 [2024-11-25 23:35:59.737539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.586 [2024-11-25 23:35:59.737585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.586 [2024-11-25 23:35:59.737592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:27.586 [2024-11-25 23:35:59.737599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.586 [2024-11-25 23:35:59.737605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.586 [2024-11-25 23:35:59.737645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.587 [2024-11-25 23:35:59.737652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:27.587 [2024-11-25 23:35:59.737662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.587 [2024-11-25 23:35:59.737668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.587 [2024-11-25 23:35:59.737680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.587 [2024-11-25 23:35:59.737686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:27.587 [2024-11-25 23:35:59.737695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.587 [2024-11-25 23:35:59.737701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.587 [2024-11-25 23:35:59.800739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.587 [2024-11-25 23:35:59.800773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:27.587 [2024-11-25 23:35:59.800783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.587 [2024-11-25 23:35:59.800790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.587 [2024-11-25 23:35:59.852341] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:34:27.587 [2024-11-25 23:35:59.852373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:27.587 [2024-11-25 23:35:59.852384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.587 [2024-11-25 23:35:59.852390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.587 [2024-11-25 23:35:59.852456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.587 [2024-11-25 23:35:59.852464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:27.587 [2024-11-25 23:35:59.852471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.587 [2024-11-25 23:35:59.852480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.587 [2024-11-25 23:35:59.852509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.587 [2024-11-25 23:35:59.852516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:27.587 [2024-11-25 23:35:59.852523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.587 [2024-11-25 23:35:59.852529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.587 [2024-11-25 23:35:59.852590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.587 [2024-11-25 23:35:59.852598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:27.587 [2024-11-25 23:35:59.852610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.587 [2024-11-25 23:35:59.852618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.587 [2024-11-25 23:35:59.852640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.587 [2024-11-25 23:35:59.852647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:27.587 [2024-11-25 23:35:59.852653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.587 [2024-11-25 23:35:59.852659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.587 [2024-11-25 23:35:59.852691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.587 [2024-11-25 23:35:59.852699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:27.587 [2024-11-25 23:35:59.852705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.587 [2024-11-25 23:35:59.852711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.587 [2024-11-25 23:35:59.852750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.587 [2024-11-25 23:35:59.852757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:27.587 [2024-11-25 23:35:59.852764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.587 [2024-11-25 23:35:59.852770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.587 [2024-11-25 23:35:59.852877] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 158.957 ms, result 0 00:34:28.158 00:34:28.158 00:34:28.419 23:36:00 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 
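Note on the trace records above: each FTL management step is logged by trace_step as a four-record group (Action, name, duration, status), and every management process ends with a finish_msg summary ('FTL startup' completed in 230.225 ms, 'FTL fast shutdown' in 158.957 ms). Below is a minimal sketch of a post-processor that aggregates those durations from a saved copy of this console log. It is illustrative only, not part of SPDK or this test harness; the file name build.log is a placeholder, and the regexes assume one record per line, as in the raw console output.

#!/usr/bin/env python3
# Illustrative helper (not part of SPDK): sum per-step durations from the
# FTL trace_step records in a saved console log and print the slowest steps.
import re
import sys
from collections import defaultdict

# 'name: <step>' record (428:trace_step), e.g. "name: Load super block"
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] name: (.+?)\s*$")
# 'duration: <ms> ms' record (430:trace_step), e.g. "duration: 10.117 ms"
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] duration: ([0-9.]+) ms")

def summarize(path):
    totals = defaultdict(float)  # step name -> accumulated duration in ms
    pending = {}                 # device (e.g. ftl0) -> last step name seen
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            m = NAME_RE.search(line)
            if m:
                pending[m.group(1)] = m.group(2)
                continue
            m = DUR_RE.search(line)
            if m and m.group(1) in pending:
                totals[pending.pop(m.group(1))] += float(m.group(2))
    for name, ms in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]:
        print(f"{ms:10.3f} ms  {name}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "build.log")  # placeholder path

Run against the startup sequence above, a summary like this would rank 'Restore P2L checkpoints' (48.267 ms) and 'Initialize NV cache' (43.898 ms) as the slowest steps.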
00:34:28.419 [2024-11-25 23:36:00.597269] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:34:28.419 [2024-11-25 23:36:00.597389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86069 ] 00:34:28.419 [2024-11-25 23:36:00.750545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.680 [2024-11-25 23:36:00.860783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.940 [2024-11-25 23:36:01.088202] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:28.940 [2024-11-25 23:36:01.088252] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:28.940 [2024-11-25 23:36:01.243976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.940 [2024-11-25 23:36:01.244013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:28.940 [2024-11-25 23:36:01.244026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:28.940 [2024-11-25 23:36:01.244033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.940 [2024-11-25 23:36:01.244079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.940 [2024-11-25 23:36:01.244091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:28.940 [2024-11-25 23:36:01.244098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:34:28.940 [2024-11-25 23:36:01.244105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.940 [2024-11-25 23:36:01.244119] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:28.940 [2024-11-25 23:36:01.244660] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:28.940 [2024-11-25 23:36:01.244673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.940 [2024-11-25 23:36:01.244679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:28.940 [2024-11-25 23:36:01.244686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:34:28.940 [2024-11-25 23:36:01.244692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.940 [2024-11-25 23:36:01.244889] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:28.940 [2024-11-25 23:36:01.244909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.940 [2024-11-25 23:36:01.244918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:28.940 [2024-11-25 23:36:01.244925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:34:28.940 [2024-11-25 23:36:01.244931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.940 [2024-11-25 23:36:01.245001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.940 [2024-11-25 23:36:01.245010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:28.940 [2024-11-25 23:36:01.245017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:34:28.940 [2024-11-25 23:36:01.245022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:34:28.940 [2024-11-25 23:36:01.245246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.940 [2024-11-25 23:36:01.245257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:28.940 [2024-11-25 23:36:01.245264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:34:28.941 [2024-11-25 23:36:01.245270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.941 [2024-11-25 23:36:01.245322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.941 [2024-11-25 23:36:01.245329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:28.941 [2024-11-25 23:36:01.245336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:34:28.941 [2024-11-25 23:36:01.245343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.941 [2024-11-25 23:36:01.245359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.941 [2024-11-25 23:36:01.245365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:28.941 [2024-11-25 23:36:01.245373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:28.941 [2024-11-25 23:36:01.245379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.941 [2024-11-25 23:36:01.245392] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:28.941 [2024-11-25 23:36:01.248614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.941 [2024-11-25 23:36:01.248635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:28.941 [2024-11-25 23:36:01.248643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.225 ms 00:34:28.941 [2024-11-25 23:36:01.248649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.941 [2024-11-25 23:36:01.248676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.941 [2024-11-25 23:36:01.248683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:28.941 [2024-11-25 23:36:01.248689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:28.941 [2024-11-25 23:36:01.248696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.941 [2024-11-25 23:36:01.248729] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:28.941 [2024-11-25 23:36:01.248747] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:28.941 [2024-11-25 23:36:01.248778] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:28.941 [2024-11-25 23:36:01.248792] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:28.941 [2024-11-25 23:36:01.248872] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:28.941 [2024-11-25 23:36:01.248880] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:28.941 [2024-11-25 23:36:01.248888] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:28.941 [2024-11-25 23:36:01.248896] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:28.941 [2024-11-25 23:36:01.248903] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:28.941 [2024-11-25 23:36:01.248911] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:28.941 [2024-11-25 23:36:01.248917] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:28.941 [2024-11-25 23:36:01.248922] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:28.941 [2024-11-25 23:36:01.248928] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:28.941 [2024-11-25 23:36:01.248934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.941 [2024-11-25 23:36:01.248940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:28.941 [2024-11-25 23:36:01.248946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:34:28.941 [2024-11-25 23:36:01.248951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.941 [2024-11-25 23:36:01.249033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.941 [2024-11-25 23:36:01.249040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:28.941 [2024-11-25 23:36:01.249046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:34:28.941 [2024-11-25 23:36:01.249068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.941 [2024-11-25 23:36:01.249148] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:28.941 [2024-11-25 23:36:01.249156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:28.941 [2024-11-25 23:36:01.249163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:28.941 [2024-11-25 23:36:01.249169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:28.941 [2024-11-25 23:36:01.249177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:28.941 [2024-11-25 23:36:01.249183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:28.941 [2024-11-25 23:36:01.249188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:28.941 [2024-11-25 23:36:01.249193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:28.941 [2024-11-25 23:36:01.249199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:28.941 [2024-11-25 23:36:01.249204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:28.941 [2024-11-25 23:36:01.249209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:28.941 [2024-11-25 23:36:01.249214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:28.941 [2024-11-25 23:36:01.249219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:28.941 [2024-11-25 23:36:01.249224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:28.941 [2024-11-25 23:36:01.249230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:28.941 [2024-11-25 23:36:01.249240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:28.941 [2024-11-25 23:36:01.249245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:28.941 [2024-11-25 23:36:01.249250] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:28.941 [2024-11-25 23:36:01.249255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:28.941 [2024-11-25 23:36:01.249262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:28.941 [2024-11-25 23:36:01.249267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:28.941 [2024-11-25 23:36:01.249273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:28.941 [2024-11-25 23:36:01.249278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:28.941 [2024-11-25 23:36:01.249284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:28.941 [2024-11-25 23:36:01.249289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:28.941 [2024-11-25 23:36:01.249294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:28.941 [2024-11-25 23:36:01.249299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:28.941 [2024-11-25 23:36:01.249305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:28.941 [2024-11-25 23:36:01.249310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:28.941 [2024-11-25 23:36:01.249315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:28.941 [2024-11-25 23:36:01.249320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:28.941 [2024-11-25 23:36:01.249325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:28.941 [2024-11-25 23:36:01.249330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:28.941 [2024-11-25 23:36:01.249334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:28.941 [2024-11-25 23:36:01.249339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:28.941 [2024-11-25 23:36:01.249344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:28.941 [2024-11-25 23:36:01.249350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:28.941 [2024-11-25 23:36:01.249356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:28.941 [2024-11-25 23:36:01.249361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:28.941 [2024-11-25 23:36:01.249366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:28.941 [2024-11-25 23:36:01.249371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:28.941 [2024-11-25 23:36:01.249376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:28.941 [2024-11-25 23:36:01.249381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:28.941 [2024-11-25 23:36:01.249386] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:28.941 [2024-11-25 23:36:01.249392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:28.941 [2024-11-25 23:36:01.249397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:28.941 [2024-11-25 23:36:01.249403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:28.941 [2024-11-25 23:36:01.249410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:28.941 [2024-11-25 23:36:01.249415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:28.941 [2024-11-25 
23:36:01.249420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:28.941 [2024-11-25 23:36:01.249425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:28.941 [2024-11-25 23:36:01.249430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:28.941 [2024-11-25 23:36:01.249435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:28.941 [2024-11-25 23:36:01.249441] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:28.941 [2024-11-25 23:36:01.249448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:28.941 [2024-11-25 23:36:01.249454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:28.941 [2024-11-25 23:36:01.249460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:28.941 [2024-11-25 23:36:01.249465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:28.941 [2024-11-25 23:36:01.249471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:28.941 [2024-11-25 23:36:01.249476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:28.941 [2024-11-25 23:36:01.249482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:28.941 [2024-11-25 23:36:01.249487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:28.941 [2024-11-25 23:36:01.249493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:28.941 [2024-11-25 23:36:01.249498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:28.941 [2024-11-25 23:36:01.249504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:28.941 [2024-11-25 23:36:01.249509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:28.941 [2024-11-25 23:36:01.249514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:28.941 [2024-11-25 23:36:01.249519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:28.941 [2024-11-25 23:36:01.249527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:28.941 [2024-11-25 23:36:01.249532] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:28.941 [2024-11-25 23:36:01.249539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:28.941 [2024-11-25 23:36:01.249545] 
upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:28.941 [2024-11-25 23:36:01.249551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:28.941 [2024-11-25 23:36:01.249556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:28.941 [2024-11-25 23:36:01.249561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:28.941 [2024-11-25 23:36:01.249567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.941 [2024-11-25 23:36:01.249573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:28.941 [2024-11-25 23:36:01.249579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:34:28.941 [2024-11-25 23:36:01.249585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.941 [2024-11-25 23:36:01.270485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.941 [2024-11-25 23:36:01.270508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:28.941 [2024-11-25 23:36:01.270516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.870 ms 00:34:28.941 [2024-11-25 23:36:01.270522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:28.941 [2024-11-25 23:36:01.270584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:28.941 [2024-11-25 23:36:01.270591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:28.941 [2024-11-25 23:36:01.270599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:34:28.941 [2024-11-25 23:36:01.270606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.202 [2024-11-25 23:36:01.313357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.202 [2024-11-25 23:36:01.313386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:29.202 [2024-11-25 23:36:01.313395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.711 ms 00:34:29.202 [2024-11-25 23:36:01.313402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.202 [2024-11-25 23:36:01.313435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.202 [2024-11-25 23:36:01.313443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:29.202 [2024-11-25 23:36:01.313450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:29.202 [2024-11-25 23:36:01.313456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.202 [2024-11-25 23:36:01.313532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.202 [2024-11-25 23:36:01.313540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:29.202 [2024-11-25 23:36:01.313548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:34:29.202 [2024-11-25 23:36:01.313553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.202 [2024-11-25 23:36:01.313650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.202 [2024-11-25 23:36:01.313659] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:29.202 [2024-11-25 23:36:01.313665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:34:29.202 [2024-11-25 23:36:01.313671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.202 [2024-11-25 23:36:01.325579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.202 [2024-11-25 23:36:01.325602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:29.202 [2024-11-25 23:36:01.325610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.894 ms 00:34:29.202 [2024-11-25 23:36:01.325616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.202 [2024-11-25 23:36:01.325708] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:29.202 [2024-11-25 23:36:01.325718] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:29.202 [2024-11-25 23:36:01.325725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.202 [2024-11-25 23:36:01.325734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:29.202 [2024-11-25 23:36:01.325740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:34:29.202 [2024-11-25 23:36:01.325746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.202 [2024-11-25 23:36:01.334881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.202 [2024-11-25 23:36:01.334902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:29.202 [2024-11-25 23:36:01.334910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.124 ms 00:34:29.202 [2024-11-25 23:36:01.334916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.202 [2024-11-25 23:36:01.335010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.202 [2024-11-25 23:36:01.335018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:29.202 [2024-11-25 23:36:01.335024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:34:29.202 [2024-11-25 23:36:01.335033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.202 [2024-11-25 23:36:01.335066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.202 [2024-11-25 23:36:01.335074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:29.202 [2024-11-25 23:36:01.335087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:34:29.202 [2024-11-25 23:36:01.335093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.202 [2024-11-25 23:36:01.335530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.202 [2024-11-25 23:36:01.335545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:29.202 [2024-11-25 23:36:01.335552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:34:29.202 [2024-11-25 23:36:01.335559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.202 [2024-11-25 23:36:01.335573] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:29.202 [2024-11-25 23:36:01.335581] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:34:29.202 [2024-11-25 23:36:01.335588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:29.202 [2024-11-25 23:36:01.335594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:29.202 [2024-11-25 23:36:01.335600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.202 [2024-11-25 23:36:01.345104] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:29.202 [2024-11-25 23:36:01.345213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.202 [2024-11-25 23:36:01.345222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:29.202 [2024-11-25 23:36:01.345229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.599 ms 00:34:29.202 [2024-11-25 23:36:01.345235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.202 [2024-11-25 23:36:01.346825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.202 [2024-11-25 23:36:01.346844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:29.202 [2024-11-25 23:36:01.346851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms 00:34:29.203 [2024-11-25 23:36:01.346857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.203 [2024-11-25 23:36:01.346935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.203 [2024-11-25 23:36:01.346943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:29.203 [2024-11-25 23:36:01.346950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:34:29.203 [2024-11-25 23:36:01.346956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.203 [2024-11-25 23:36:01.346974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.203 [2024-11-25 23:36:01.346984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:29.203 [2024-11-25 23:36:01.346991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:29.203 [2024-11-25 23:36:01.346997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.203 [2024-11-25 23:36:01.347021] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:29.203 [2024-11-25 23:36:01.347029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.203 [2024-11-25 23:36:01.347035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:29.203 [2024-11-25 23:36:01.347041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:29.203 [2024-11-25 23:36:01.347047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.203 [2024-11-25 23:36:01.366358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.203 [2024-11-25 23:36:01.366383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:29.203 [2024-11-25 23:36:01.366392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.283 ms 00:34:29.203 [2024-11-25 23:36:01.366399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.203 [2024-11-25 23:36:01.366459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.203 [2024-11-25 23:36:01.366467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Finalize initialization 00:34:29.203 [2024-11-25 23:36:01.366474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:34:29.203 [2024-11-25 23:36:01.366480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.203 [2024-11-25 23:36:01.367297] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 122.953 ms, result 0 00:34:30.587  [2024-11-25T23:36:03.528Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-25T23:36:04.914Z] Copying: 23/1024 [MB] (11 MBps) [2024-11-25T23:36:05.856Z] Copying: 35/1024 [MB] (11 MBps) [2024-11-25T23:36:06.799Z] Copying: 46/1024 [MB] (11 MBps) [2024-11-25T23:36:07.743Z] Copying: 57/1024 [MB] (11 MBps) [2024-11-25T23:36:08.686Z] Copying: 68/1024 [MB] (10 MBps) [2024-11-25T23:36:09.628Z] Copying: 80/1024 [MB] (11 MBps) [2024-11-25T23:36:10.570Z] Copying: 92/1024 [MB] (11 MBps) [2024-11-25T23:36:11.571Z] Copying: 103/1024 [MB] (11 MBps) [2024-11-25T23:36:12.549Z] Copying: 115/1024 [MB] (12 MBps) [2024-11-25T23:36:13.934Z] Copying: 127/1024 [MB] (11 MBps) [2024-11-25T23:36:14.875Z] Copying: 138/1024 [MB] (11 MBps) [2024-11-25T23:36:15.816Z] Copying: 150/1024 [MB] (11 MBps) [2024-11-25T23:36:16.757Z] Copying: 162/1024 [MB] (11 MBps) [2024-11-25T23:36:17.700Z] Copying: 173/1024 [MB] (11 MBps) [2024-11-25T23:36:18.640Z] Copying: 185/1024 [MB] (11 MBps) [2024-11-25T23:36:19.581Z] Copying: 196/1024 [MB] (11 MBps) [2024-11-25T23:36:20.523Z] Copying: 208/1024 [MB] (11 MBps) [2024-11-25T23:36:21.906Z] Copying: 219/1024 [MB] (11 MBps) [2024-11-25T23:36:22.847Z] Copying: 231/1024 [MB] (11 MBps) [2024-11-25T23:36:23.787Z] Copying: 243/1024 [MB] (11 MBps) [2024-11-25T23:36:24.730Z] Copying: 254/1024 [MB] (11 MBps) [2024-11-25T23:36:25.674Z] Copying: 265/1024 [MB] (11 MBps) [2024-11-25T23:36:26.615Z] Copying: 276/1024 [MB] (10 MBps) [2024-11-25T23:36:27.558Z] Copying: 288/1024 [MB] (12 MBps) [2024-11-25T23:36:28.943Z] Copying: 300/1024 [MB] (11 MBps) [2024-11-25T23:36:29.514Z] Copying: 312/1024 [MB] (11 MBps) [2024-11-25T23:36:30.903Z] Copying: 322/1024 [MB] (10 MBps) [2024-11-25T23:36:31.847Z] Copying: 333/1024 [MB] (10 MBps) [2024-11-25T23:36:32.790Z] Copying: 343/1024 [MB] (10 MBps) [2024-11-25T23:36:33.731Z] Copying: 358/1024 [MB] (14 MBps) [2024-11-25T23:36:34.674Z] Copying: 369/1024 [MB] (11 MBps) [2024-11-25T23:36:35.619Z] Copying: 380/1024 [MB] (11 MBps) [2024-11-25T23:36:36.563Z] Copying: 392/1024 [MB] (11 MBps) [2024-11-25T23:36:37.975Z] Copying: 402/1024 [MB] (10 MBps) [2024-11-25T23:36:38.548Z] Copying: 413/1024 [MB] (11 MBps) [2024-11-25T23:36:39.932Z] Copying: 425/1024 [MB] (11 MBps) [2024-11-25T23:36:40.879Z] Copying: 437/1024 [MB] (11 MBps) [2024-11-25T23:36:41.823Z] Copying: 448/1024 [MB] (11 MBps) [2024-11-25T23:36:42.767Z] Copying: 460/1024 [MB] (11 MBps) [2024-11-25T23:36:43.713Z] Copying: 471/1024 [MB] (11 MBps) [2024-11-25T23:36:44.703Z] Copying: 484/1024 [MB] (12 MBps) [2024-11-25T23:36:45.647Z] Copying: 495/1024 [MB] (11 MBps) [2024-11-25T23:36:46.592Z] Copying: 509/1024 [MB] (13 MBps) [2024-11-25T23:36:47.537Z] Copying: 519/1024 [MB] (10 MBps) [2024-11-25T23:36:48.927Z] Copying: 534/1024 [MB] (14 MBps) [2024-11-25T23:36:49.871Z] Copying: 545/1024 [MB] (10 MBps) [2024-11-25T23:36:50.816Z] Copying: 557/1024 [MB] (11 MBps) [2024-11-25T23:36:51.760Z] Copying: 568/1024 [MB] (11 MBps) [2024-11-25T23:36:52.705Z] Copying: 580/1024 [MB] (11 MBps) [2024-11-25T23:36:53.648Z] Copying: 591/1024 [MB] (11 MBps) [2024-11-25T23:36:54.590Z] Copying: 603/1024 [MB] (11 MBps) 
[2024-11-25T23:36:55.536Z] Copying: 615/1024 [MB] (11 MBps) [2024-11-25T23:36:56.922Z] Copying: 626/1024 [MB] (11 MBps) [2024-11-25T23:36:57.866Z] Copying: 638/1024 [MB] (11 MBps) [2024-11-25T23:36:58.811Z] Copying: 650/1024 [MB] (11 MBps) [2024-11-25T23:36:59.751Z] Copying: 661/1024 [MB] (11 MBps) [2024-11-25T23:37:00.696Z] Copying: 673/1024 [MB] (11 MBps) [2024-11-25T23:37:01.675Z] Copying: 684/1024 [MB] (11 MBps) [2024-11-25T23:37:02.619Z] Copying: 695/1024 [MB] (10 MBps) [2024-11-25T23:37:03.560Z] Copying: 705/1024 [MB] (10 MBps) [2024-11-25T23:37:04.947Z] Copying: 716/1024 [MB] (10 MBps) [2024-11-25T23:37:05.520Z] Copying: 726/1024 [MB] (10 MBps) [2024-11-25T23:37:06.903Z] Copying: 737/1024 [MB] (10 MBps) [2024-11-25T23:37:07.849Z] Copying: 753/1024 [MB] (16 MBps) [2024-11-25T23:37:08.793Z] Copying: 774/1024 [MB] (20 MBps) [2024-11-25T23:37:09.737Z] Copying: 794/1024 [MB] (20 MBps) [2024-11-25T23:37:10.681Z] Copying: 811/1024 [MB] (17 MBps) [2024-11-25T23:37:11.625Z] Copying: 831/1024 [MB] (19 MBps) [2024-11-25T23:37:12.568Z] Copying: 842/1024 [MB] (11 MBps) [2024-11-25T23:37:13.954Z] Copying: 861/1024 [MB] (18 MBps) [2024-11-25T23:37:14.526Z] Copying: 878/1024 [MB] (16 MBps) [2024-11-25T23:37:15.562Z] Copying: 895/1024 [MB] (16 MBps) [2024-11-25T23:37:16.953Z] Copying: 906/1024 [MB] (11 MBps) [2024-11-25T23:37:17.525Z] Copying: 917/1024 [MB] (10 MBps) [2024-11-25T23:37:18.910Z] Copying: 927/1024 [MB] (10 MBps) [2024-11-25T23:37:19.855Z] Copying: 938/1024 [MB] (11 MBps) [2024-11-25T23:37:20.799Z] Copying: 951/1024 [MB] (13 MBps) [2024-11-25T23:37:21.742Z] Copying: 962/1024 [MB] (10 MBps) [2024-11-25T23:37:22.686Z] Copying: 972/1024 [MB] (10 MBps) [2024-11-25T23:37:23.646Z] Copying: 990/1024 [MB] (17 MBps) [2024-11-25T23:37:24.218Z] Copying: 1012/1024 [MB] (21 MBps) [2024-11-25T23:37:24.480Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-25 23:37:24.220484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.111 [2024-11-25 23:37:24.220587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:52.111 [2024-11-25 23:37:24.220610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:52.111 [2024-11-25 23:37:24.220623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.111 [2024-11-25 23:37:24.220658] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:52.111 [2024-11-25 23:37:24.225096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.111 [2024-11-25 23:37:24.225157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:52.111 [2024-11-25 23:37:24.225173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.399 ms 00:35:52.111 [2024-11-25 23:37:24.225184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.111 [2024-11-25 23:37:24.225519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.111 [2024-11-25 23:37:24.225535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:52.111 [2024-11-25 23:37:24.225548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:35:52.111 [2024-11-25 23:37:24.225560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.111 [2024-11-25 23:37:24.225605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.111 [2024-11-25 23:37:24.225619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Fast persist NV cache metadata 00:35:52.111 [2024-11-25 23:37:24.225632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:52.111 [2024-11-25 23:37:24.225643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.111 [2024-11-25 23:37:24.225723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.111 [2024-11-25 23:37:24.225737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:52.111 [2024-11-25 23:37:24.225748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:35:52.111 [2024-11-25 23:37:24.225759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.111 [2024-11-25 23:37:24.225779] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:52.111 [2024-11-25 23:37:24.225797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.225994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 
00:35:52.111 [2024-11-25 23:37:24.226004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:52.111 [2024-11-25 23:37:24.226201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 
wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226851] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:52.112 [2024-11-25 23:37:24.226937] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:52.112 [2024-11-25 23:37:24.226951] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 85c08bd9-98e2-4ce8-9a59-5099f5a41b5c 00:35:52.112 [2024-11-25 23:37:24.226963] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:35:52.112 [2024-11-25 23:37:24.226974] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:35:52.112 [2024-11-25 23:37:24.226984] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:52.112 [2024-11-25 23:37:24.226995] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:52.112 [2024-11-25 23:37:24.227007] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:52.112 [2024-11-25 23:37:24.227018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:52.112 [2024-11-25 23:37:24.227028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:52.112 [2024-11-25 23:37:24.227038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:52.112 [2024-11-25 23:37:24.227046] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:52.112 [2024-11-25 23:37:24.227069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.112 [2024-11-25 23:37:24.227081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:52.112 [2024-11-25 23:37:24.227092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.290 ms 00:35:52.112 [2024-11-25 23:37:24.227102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.112 [2024-11-25 23:37:24.242818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.112 [2024-11-25 23:37:24.242869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:52.112 [2024-11-25 23:37:24.242882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.691 ms 00:35:52.112 [2024-11-25 23:37:24.242891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.112 [2024-11-25 23:37:24.243313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.112 [2024-11-25 23:37:24.243335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:52.112 [2024-11-25 23:37:24.243354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:35:52.112 [2024-11-25 23:37:24.243362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.112 
[2024-11-25 23:37:24.280393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.112 [2024-11-25 23:37:24.280446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:52.112 [2024-11-25 23:37:24.280457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.112 [2024-11-25 23:37:24.280465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.112 [2024-11-25 23:37:24.280540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.112 [2024-11-25 23:37:24.280549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:52.112 [2024-11-25 23:37:24.280564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.112 [2024-11-25 23:37:24.280573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.113 [2024-11-25 23:37:24.280637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.113 [2024-11-25 23:37:24.280648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:52.113 [2024-11-25 23:37:24.280657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.113 [2024-11-25 23:37:24.280665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.113 [2024-11-25 23:37:24.280682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.113 [2024-11-25 23:37:24.280691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:52.113 [2024-11-25 23:37:24.280698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.113 [2024-11-25 23:37:24.280710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.113 [2024-11-25 23:37:24.366998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.113 [2024-11-25 23:37:24.367053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:52.113 [2024-11-25 23:37:24.367088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.113 [2024-11-25 23:37:24.367096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.113 [2024-11-25 23:37:24.438159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.113 [2024-11-25 23:37:24.438222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:52.113 [2024-11-25 23:37:24.438236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.113 [2024-11-25 23:37:24.438252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.113 [2024-11-25 23:37:24.438336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.113 [2024-11-25 23:37:24.438347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:52.113 [2024-11-25 23:37:24.438356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.113 [2024-11-25 23:37:24.438365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.113 [2024-11-25 23:37:24.438408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.113 [2024-11-25 23:37:24.438419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:52.113 [2024-11-25 23:37:24.438428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.113 [2024-11-25 23:37:24.438436] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.113 [2024-11-25 23:37:24.438518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.113 [2024-11-25 23:37:24.438528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:52.113 [2024-11-25 23:37:24.438537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.113 [2024-11-25 23:37:24.438545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.113 [2024-11-25 23:37:24.438572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.113 [2024-11-25 23:37:24.438582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:52.113 [2024-11-25 23:37:24.438589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.113 [2024-11-25 23:37:24.438597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.113 [2024-11-25 23:37:24.438641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.113 [2024-11-25 23:37:24.438652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:52.113 [2024-11-25 23:37:24.438660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.113 [2024-11-25 23:37:24.438668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.113 [2024-11-25 23:37:24.438715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:52.113 [2024-11-25 23:37:24.438726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:52.113 [2024-11-25 23:37:24.438735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:52.113 [2024-11-25 23:37:24.438742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.113 [2024-11-25 23:37:24.438877] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 218.366 ms, result 0 00:35:53.055 00:35:53.055 00:35:53.055 23:37:25 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:55.599 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:55.599 23:37:27 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:35:55.599 [2024-11-25 23:37:27.421533] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:35:55.599 [2024-11-25 23:37:27.421664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86933 ] 00:35:55.599 [2024-11-25 23:37:27.583559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.599 [2024-11-25 23:37:27.694945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:55.861 [2024-11-25 23:37:27.971549] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:55.861 [2024-11-25 23:37:27.971616] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:55.861 [2024-11-25 23:37:28.133397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.861 [2024-11-25 23:37:28.133468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:55.861 [2024-11-25 23:37:28.133484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:55.861 [2024-11-25 23:37:28.133494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.861 [2024-11-25 23:37:28.133547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.861 [2024-11-25 23:37:28.133559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:55.861 [2024-11-25 23:37:28.133569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:35:55.861 [2024-11-25 23:37:28.133577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.861 [2024-11-25 23:37:28.133598] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:55.861 [2024-11-25 23:37:28.134326] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:55.861 [2024-11-25 23:37:28.134354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.861 [2024-11-25 23:37:28.134363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:55.861 [2024-11-25 23:37:28.134373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:35:55.861 [2024-11-25 23:37:28.134381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.861 [2024-11-25 23:37:28.134986] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:35:55.861 [2024-11-25 23:37:28.135087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.861 [2024-11-25 23:37:28.135106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:55.861 [2024-11-25 23:37:28.135119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:35:55.861 [2024-11-25 23:37:28.135128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.861 [2024-11-25 23:37:28.135233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.861 [2024-11-25 23:37:28.135247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:55.861 [2024-11-25 23:37:28.135256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:35:55.861 [2024-11-25 23:37:28.135264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.861 [2024-11-25 23:37:28.135617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:35:55.861 [2024-11-25 23:37:28.135641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:55.861 [2024-11-25 23:37:28.135650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:35:55.861 [2024-11-25 23:37:28.135658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.861 [2024-11-25 23:37:28.135736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.861 [2024-11-25 23:37:28.135747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:55.861 [2024-11-25 23:37:28.135756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:35:55.861 [2024-11-25 23:37:28.135764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.861 [2024-11-25 23:37:28.135787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.861 [2024-11-25 23:37:28.135800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:55.861 [2024-11-25 23:37:28.135808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:55.861 [2024-11-25 23:37:28.135816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.861 [2024-11-25 23:37:28.135839] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:55.861 [2024-11-25 23:37:28.140253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.861 [2024-11-25 23:37:28.140300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:55.861 [2024-11-25 23:37:28.140311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.421 ms 00:35:55.861 [2024-11-25 23:37:28.140319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.861 [2024-11-25 23:37:28.140356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.861 [2024-11-25 23:37:28.140364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:55.861 [2024-11-25 23:37:28.140372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:35:55.861 [2024-11-25 23:37:28.140380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.861 [2024-11-25 23:37:28.140444] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:55.861 [2024-11-25 23:37:28.140474] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:55.861 [2024-11-25 23:37:28.140511] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:55.861 [2024-11-25 23:37:28.140527] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:55.861 [2024-11-25 23:37:28.140634] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:55.861 [2024-11-25 23:37:28.140645] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:55.861 [2024-11-25 23:37:28.140656] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:55.861 [2024-11-25 23:37:28.140668] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:55.861 [2024-11-25 23:37:28.140680] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:55.861 [2024-11-25 23:37:28.140689] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:55.861 [2024-11-25 23:37:28.140696] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:55.861 [2024-11-25 23:37:28.140704] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:55.861 [2024-11-25 23:37:28.140711] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:55.861 [2024-11-25 23:37:28.140719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.861 [2024-11-25 23:37:28.140727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:55.861 [2024-11-25 23:37:28.140737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:35:55.861 [2024-11-25 23:37:28.140744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.861 [2024-11-25 23:37:28.140830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.861 [2024-11-25 23:37:28.140840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:55.861 [2024-11-25 23:37:28.140851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:35:55.861 [2024-11-25 23:37:28.140859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.861 [2024-11-25 23:37:28.140995] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:55.861 [2024-11-25 23:37:28.141018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:55.861 [2024-11-25 23:37:28.141027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:55.861 [2024-11-25 23:37:28.141036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:55.861 [2024-11-25 23:37:28.141045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:55.861 [2024-11-25 23:37:28.141052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:55.861 [2024-11-25 23:37:28.141079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:55.861 [2024-11-25 23:37:28.141087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:55.861 [2024-11-25 23:37:28.141095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:55.861 [2024-11-25 23:37:28.141104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:55.861 [2024-11-25 23:37:28.141111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:55.861 [2024-11-25 23:37:28.141120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:55.862 [2024-11-25 23:37:28.141127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:55.862 [2024-11-25 23:37:28.141136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:55.862 [2024-11-25 23:37:28.141143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:55.862 [2024-11-25 23:37:28.141157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:55.862 [2024-11-25 23:37:28.141165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:55.862 [2024-11-25 23:37:28.141172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:55.862 [2024-11-25 23:37:28.141179] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:55.862 [2024-11-25 23:37:28.141186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:55.862 [2024-11-25 23:37:28.141193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:55.862 [2024-11-25 23:37:28.141200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:55.862 [2024-11-25 23:37:28.141206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:55.862 [2024-11-25 23:37:28.141213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:55.862 [2024-11-25 23:37:28.141220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:55.862 [2024-11-25 23:37:28.141226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:55.862 [2024-11-25 23:37:28.141233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:55.862 [2024-11-25 23:37:28.141240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:55.862 [2024-11-25 23:37:28.141247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:55.862 [2024-11-25 23:37:28.141253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:55.862 [2024-11-25 23:37:28.141260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:55.862 [2024-11-25 23:37:28.141266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:55.862 [2024-11-25 23:37:28.141276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:55.862 [2024-11-25 23:37:28.141283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:55.862 [2024-11-25 23:37:28.141289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:55.862 [2024-11-25 23:37:28.141296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:55.862 [2024-11-25 23:37:28.141302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:55.862 [2024-11-25 23:37:28.141309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:55.862 [2024-11-25 23:37:28.141315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:55.862 [2024-11-25 23:37:28.141322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:55.862 [2024-11-25 23:37:28.141329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:55.862 [2024-11-25 23:37:28.141337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:55.862 [2024-11-25 23:37:28.141345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:55.862 [2024-11-25 23:37:28.141353] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:55.862 [2024-11-25 23:37:28.141362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:55.862 [2024-11-25 23:37:28.141369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:55.862 [2024-11-25 23:37:28.141379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:55.862 [2024-11-25 23:37:28.141388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:55.862 [2024-11-25 23:37:28.141395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:55.862 [2024-11-25 23:37:28.141401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:55.862 
[2024-11-25 23:37:28.141408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:55.862 [2024-11-25 23:37:28.141415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:55.862 [2024-11-25 23:37:28.141422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:55.862 [2024-11-25 23:37:28.141430] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:55.862 [2024-11-25 23:37:28.141441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:55.862 [2024-11-25 23:37:28.141449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:55.862 [2024-11-25 23:37:28.141456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:55.862 [2024-11-25 23:37:28.141463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:55.862 [2024-11-25 23:37:28.141469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:55.862 [2024-11-25 23:37:28.141476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:55.862 [2024-11-25 23:37:28.141483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:55.862 [2024-11-25 23:37:28.141489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:55.862 [2024-11-25 23:37:28.141497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:55.862 [2024-11-25 23:37:28.141505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:55.862 [2024-11-25 23:37:28.141513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:55.862 [2024-11-25 23:37:28.141519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:55.862 [2024-11-25 23:37:28.141526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:55.862 [2024-11-25 23:37:28.141535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:55.862 [2024-11-25 23:37:28.141550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:55.862 [2024-11-25 23:37:28.141557] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:55.862 [2024-11-25 23:37:28.141565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:55.862 [2024-11-25 23:37:28.141573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:35:55.862 [2024-11-25 23:37:28.141580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:55.862 [2024-11-25 23:37:28.141588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:55.862 [2024-11-25 23:37:28.141596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:55.862 [2024-11-25 23:37:28.141605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.862 [2024-11-25 23:37:28.141616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:55.862 [2024-11-25 23:37:28.141625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:35:55.862 [2024-11-25 23:37:28.141633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.862 [2024-11-25 23:37:28.169523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.862 [2024-11-25 23:37:28.169571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:55.862 [2024-11-25 23:37:28.169583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.845 ms 00:35:55.862 [2024-11-25 23:37:28.169592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.862 [2024-11-25 23:37:28.169683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.862 [2024-11-25 23:37:28.169696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:55.862 [2024-11-25 23:37:28.169707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:35:55.862 [2024-11-25 23:37:28.169716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.862 [2024-11-25 23:37:28.217542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.862 [2024-11-25 23:37:28.217596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:55.863 [2024-11-25 23:37:28.217609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.769 ms 00:35:55.863 [2024-11-25 23:37:28.217621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.863 [2024-11-25 23:37:28.217669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.863 [2024-11-25 23:37:28.217679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:55.863 [2024-11-25 23:37:28.217689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:55.863 [2024-11-25 23:37:28.217698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.863 [2024-11-25 23:37:28.217809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.863 [2024-11-25 23:37:28.217821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:55.863 [2024-11-25 23:37:28.217830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:35:55.863 [2024-11-25 23:37:28.217838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:55.863 [2024-11-25 23:37:28.217967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:55.863 [2024-11-25 23:37:28.217978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:55.863 [2024-11-25 23:37:28.217987] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:35:55.863 [2024-11-25 23:37:28.217995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.123 [2024-11-25 23:37:28.234008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.123 [2024-11-25 23:37:28.234070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:56.123 [2024-11-25 23:37:28.234082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.992 ms 00:35:56.123 [2024-11-25 23:37:28.234090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.123 [2024-11-25 23:37:28.234243] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:35:56.123 [2024-11-25 23:37:28.234260] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:56.123 [2024-11-25 23:37:28.234272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.123 [2024-11-25 23:37:28.234281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:56.123 [2024-11-25 23:37:28.234289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:35:56.123 [2024-11-25 23:37:28.234296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.123 [2024-11-25 23:37:28.246597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.123 [2024-11-25 23:37:28.246655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:56.123 [2024-11-25 23:37:28.246667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.281 ms 00:35:56.123 [2024-11-25 23:37:28.246674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.123 [2024-11-25 23:37:28.246798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.123 [2024-11-25 23:37:28.246807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:56.123 [2024-11-25 23:37:28.246821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:35:56.123 [2024-11-25 23:37:28.246831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.123 [2024-11-25 23:37:28.246884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.123 [2024-11-25 23:37:28.246895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:56.123 [2024-11-25 23:37:28.246912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:35:56.123 [2024-11-25 23:37:28.246920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.123 [2024-11-25 23:37:28.247536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.123 [2024-11-25 23:37:28.247560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:56.123 [2024-11-25 23:37:28.247569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:35:56.123 [2024-11-25 23:37:28.247581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.123 [2024-11-25 23:37:28.247598] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:35:56.123 [2024-11-25 23:37:28.247609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.123 [2024-11-25 23:37:28.247617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:35:56.123 [2024-11-25 23:37:28.247625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:56.123 [2024-11-25 23:37:28.247633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.123 [2024-11-25 23:37:28.260600] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:56.123 [2024-11-25 23:37:28.260757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.123 [2024-11-25 23:37:28.260769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:56.123 [2024-11-25 23:37:28.260780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.107 ms 00:35:56.123 [2024-11-25 23:37:28.260787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.123 [2024-11-25 23:37:28.262982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.123 [2024-11-25 23:37:28.263016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:56.123 [2024-11-25 23:37:28.263026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.170 ms 00:35:56.123 [2024-11-25 23:37:28.263034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.123 [2024-11-25 23:37:28.263148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.123 [2024-11-25 23:37:28.263160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:56.123 [2024-11-25 23:37:28.263170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:35:56.123 [2024-11-25 23:37:28.263179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.123 [2024-11-25 23:37:28.263208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.123 [2024-11-25 23:37:28.263217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:56.123 [2024-11-25 23:37:28.263226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:56.123 [2024-11-25 23:37:28.263234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.123 [2024-11-25 23:37:28.263265] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:56.123 [2024-11-25 23:37:28.263277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.123 [2024-11-25 23:37:28.263285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:56.123 [2024-11-25 23:37:28.263293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:35:56.123 [2024-11-25 23:37:28.263301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.123 [2024-11-25 23:37:28.290313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.123 [2024-11-25 23:37:28.290364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:56.123 [2024-11-25 23:37:28.290377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.991 ms 00:35:56.123 [2024-11-25 23:37:28.290386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.123 [2024-11-25 23:37:28.290476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.123 [2024-11-25 23:37:28.290487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:56.123 [2024-11-25 23:37:28.290495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.042 ms 00:35:56.123 [2024-11-25 23:37:28.290503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.123 [2024-11-25 23:37:28.291730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.874 ms, result 0 00:35:57.063  [2024-11-25T23:37:30.374Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-25T23:37:31.316Z] Copying: 39/1024 [MB] (19 MBps) [2024-11-25T23:37:32.704Z] Copying: 52/1024 [MB] (13 MBps) [2024-11-25T23:37:33.649Z] Copying: 70/1024 [MB] (17 MBps) [2024-11-25T23:37:34.588Z] Copying: 80/1024 [MB] (10 MBps) [2024-11-25T23:37:35.522Z] Copying: 98/1024 [MB] (17 MBps) [2024-11-25T23:37:36.457Z] Copying: 142/1024 [MB] (43 MBps) [2024-11-25T23:37:37.390Z] Copying: 178/1024 [MB] (36 MBps) [2024-11-25T23:37:38.324Z] Copying: 211/1024 [MB] (33 MBps) [2024-11-25T23:37:39.704Z] Copying: 261/1024 [MB] (49 MBps) [2024-11-25T23:37:40.649Z] Copying: 299/1024 [MB] (37 MBps) [2024-11-25T23:37:41.590Z] Copying: 309/1024 [MB] (10 MBps) [2024-11-25T23:37:42.535Z] Copying: 327/1024 [MB] (18 MBps) [2024-11-25T23:37:43.479Z] Copying: 350/1024 [MB] (22 MBps) [2024-11-25T23:37:44.423Z] Copying: 360/1024 [MB] (10 MBps) [2024-11-25T23:37:45.368Z] Copying: 370/1024 [MB] (10 MBps) [2024-11-25T23:37:46.314Z] Copying: 386/1024 [MB] (15 MBps) [2024-11-25T23:37:47.314Z] Copying: 400/1024 [MB] (13 MBps) [2024-11-25T23:37:48.697Z] Copying: 410/1024 [MB] (10 MBps) [2024-11-25T23:37:49.628Z] Copying: 420/1024 [MB] (10 MBps) [2024-11-25T23:37:50.564Z] Copying: 453/1024 [MB] (32 MBps) [2024-11-25T23:37:51.508Z] Copying: 483/1024 [MB] (30 MBps) [2024-11-25T23:37:52.453Z] Copying: 494/1024 [MB] (10 MBps) [2024-11-25T23:37:53.397Z] Copying: 510/1024 [MB] (15 MBps) [2024-11-25T23:37:54.343Z] Copying: 522/1024 [MB] (11 MBps) [2024-11-25T23:37:55.727Z] Copying: 532/1024 [MB] (10 MBps) [2024-11-25T23:37:56.667Z] Copying: 545/1024 [MB] (12 MBps) [2024-11-25T23:37:57.601Z] Copying: 568536/1048576 [kB] (10232 kBps) [2024-11-25T23:37:58.536Z] Copying: 575/1024 [MB] (20 MBps) [2024-11-25T23:37:59.480Z] Copying: 598/1024 [MB] (22 MBps) [2024-11-25T23:38:00.418Z] Copying: 609/1024 [MB] (10 MBps) [2024-11-25T23:38:01.358Z] Copying: 625/1024 [MB] (16 MBps) [2024-11-25T23:38:02.739Z] Copying: 646/1024 [MB] (20 MBps) [2024-11-25T23:38:03.675Z] Copying: 656/1024 [MB] (10 MBps) [2024-11-25T23:38:04.618Z] Copying: 681/1024 [MB] (24 MBps) [2024-11-25T23:38:05.563Z] Copying: 691/1024 [MB] (10 MBps) [2024-11-25T23:38:06.506Z] Copying: 701/1024 [MB] (10 MBps) [2024-11-25T23:38:07.440Z] Copying: 713/1024 [MB] (11 MBps) [2024-11-25T23:38:08.376Z] Copying: 735/1024 [MB] (22 MBps) [2024-11-25T23:38:09.321Z] Copying: 755/1024 [MB] (20 MBps) [2024-11-25T23:38:10.705Z] Copying: 766/1024 [MB] (11 MBps) [2024-11-25T23:38:11.638Z] Copying: 783/1024 [MB] (17 MBps) [2024-11-25T23:38:12.575Z] Copying: 811/1024 [MB] (27 MBps) [2024-11-25T23:38:13.521Z] Copying: 843/1024 [MB] (31 MBps) [2024-11-25T23:38:14.466Z] Copying: 858/1024 [MB] (14 MBps) [2024-11-25T23:38:15.410Z] Copying: 868/1024 [MB] (10 MBps) [2024-11-25T23:38:16.356Z] Copying: 888/1024 [MB] (20 MBps) [2024-11-25T23:38:17.745Z] Copying: 902/1024 [MB] (13 MBps) [2024-11-25T23:38:18.322Z] Copying: 912/1024 [MB] (10 MBps) [2024-11-25T23:38:19.346Z] Copying: 927/1024 [MB] (15 MBps) [2024-11-25T23:38:20.732Z] Copying: 947/1024 [MB] (19 MBps) [2024-11-25T23:38:21.668Z] Copying: 963/1024 [MB] (16 MBps) [2024-11-25T23:38:22.606Z] Copying: 981/1024 [MB] (17 MBps) [2024-11-25T23:38:23.550Z] Copying: 
1015/1024 [MB] (34 MBps) [2024-11-25T23:38:24.125Z] Copying: 1048112/1048576 [kB] (8200 kBps) [2024-11-25T23:38:24.125Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-25 23:38:23.831714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:51.756 [2024-11-25 23:38:23.831795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:51.756 [2024-11-25 23:38:23.831813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:51.756 [2024-11-25 23:38:23.831823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.756 [2024-11-25 23:38:23.834042] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:51.756 [2024-11-25 23:38:23.838769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:51.756 [2024-11-25 23:38:23.838817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:51.756 [2024-11-25 23:38:23.838829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.667 ms 00:36:51.756 [2024-11-25 23:38:23.838845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.756 [2024-11-25 23:38:23.851503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:51.756 [2024-11-25 23:38:23.851554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:51.756 [2024-11-25 23:38:23.851566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.916 ms 00:36:51.756 [2024-11-25 23:38:23.851576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.756 [2024-11-25 23:38:23.851606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:51.756 [2024-11-25 23:38:23.851617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:36:51.756 [2024-11-25 23:38:23.851626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:51.756 [2024-11-25 23:38:23.851634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.756 [2024-11-25 23:38:23.851696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:51.756 [2024-11-25 23:38:23.851710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:36:51.756 [2024-11-25 23:38:23.851719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:36:51.756 [2024-11-25 23:38:23.851728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.756 [2024-11-25 23:38:23.851744] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:51.756 [2024-11-25 23:38:23.851757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128000 / 261120 wr_cnt: 1 state: open 00:36:51.756 [2024-11-25 23:38:23.851767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:51.756 [2024-11-25 23:38:23.851776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:51.756 [2024-11-25 23:38:23.851784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 
[2024-11-25 23:38:23.851809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.851996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: 
free 00:36:51.757 [2024-11-25 23:38:23.852004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 
261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:51.757 [2024-11-25 23:38:23.852541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:51.758 [2024-11-25 23:38:23.852550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:51.758 [2024-11-25 23:38:23.852558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:51.758 [2024-11-25 23:38:23.852569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:51.758 [2024-11-25 23:38:23.852577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:51.758 [2024-11-25 23:38:23.852587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:51.758 [2024-11-25 23:38:23.852595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:51.758 [2024-11-25 23:38:23.852603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:51.758 [2024-11-25 23:38:23.852621] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:51.758 [2024-11-25 23:38:23.852629] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 85c08bd9-98e2-4ce8-9a59-5099f5a41b5c 00:36:51.758 [2024-11-25 23:38:23.852637] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128000 00:36:51.758 [2024-11-25 23:38:23.852644] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128032 00:36:51.758 [2024-11-25 23:38:23.852651] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128000 00:36:51.758 [2024-11-25 23:38:23.852659] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:36:51.758 [2024-11-25 23:38:23.852669] 
ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:51.758 [2024-11-25 23:38:23.852676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:51.758 [2024-11-25 23:38:23.852684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:51.758 [2024-11-25 23:38:23.852690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:51.758 [2024-11-25 23:38:23.852696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:51.758 [2024-11-25 23:38:23.852703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:51.758 [2024-11-25 23:38:23.852710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:51.758 [2024-11-25 23:38:23.852718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:36:51.758 [2024-11-25 23:38:23.852725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.758 [2024-11-25 23:38:23.866506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:51.758 [2024-11-25 23:38:23.866554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:51.758 [2024-11-25 23:38:23.866572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.765 ms 00:36:51.758 [2024-11-25 23:38:23.866580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.758 [2024-11-25 23:38:23.866965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:51.758 [2024-11-25 23:38:23.866984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:51.758 [2024-11-25 23:38:23.866993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:36:51.758 [2024-11-25 23:38:23.867002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.758 [2024-11-25 23:38:23.903538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:51.758 [2024-11-25 23:38:23.903585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:51.758 [2024-11-25 23:38:23.903597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:51.758 [2024-11-25 23:38:23.903607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.758 [2024-11-25 23:38:23.903673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:51.758 [2024-11-25 23:38:23.903683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:51.758 [2024-11-25 23:38:23.903692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:51.758 [2024-11-25 23:38:23.903701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.758 [2024-11-25 23:38:23.903757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:51.758 [2024-11-25 23:38:23.903774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:51.758 [2024-11-25 23:38:23.903784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:51.758 [2024-11-25 23:38:23.903792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.758 [2024-11-25 23:38:23.903810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:51.758 [2024-11-25 23:38:23.903819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:51.758 [2024-11-25 23:38:23.903827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:36:51.758 [2024-11-25 23:38:23.903834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.758 [2024-11-25 23:38:23.986831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:51.758 [2024-11-25 23:38:23.986895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:51.758 [2024-11-25 23:38:23.986909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:51.758 [2024-11-25 23:38:23.986917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.758 [2024-11-25 23:38:24.055994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:51.758 [2024-11-25 23:38:24.056071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:51.758 [2024-11-25 23:38:24.056085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:51.758 [2024-11-25 23:38:24.056094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.758 [2024-11-25 23:38:24.056173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:51.758 [2024-11-25 23:38:24.056185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:51.758 [2024-11-25 23:38:24.056201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:51.758 [2024-11-25 23:38:24.056209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.758 [2024-11-25 23:38:24.056248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:51.758 [2024-11-25 23:38:24.056260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:51.758 [2024-11-25 23:38:24.056268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:51.758 [2024-11-25 23:38:24.056276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.758 [2024-11-25 23:38:24.056363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:51.758 [2024-11-25 23:38:24.056374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:51.758 [2024-11-25 23:38:24.056383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:51.758 [2024-11-25 23:38:24.056395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.758 [2024-11-25 23:38:24.056424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:51.758 [2024-11-25 23:38:24.056435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:51.758 [2024-11-25 23:38:24.056444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:51.758 [2024-11-25 23:38:24.056452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.758 [2024-11-25 23:38:24.056495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:51.758 [2024-11-25 23:38:24.056505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:51.758 [2024-11-25 23:38:24.056514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:51.758 [2024-11-25 23:38:24.056526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.758 [2024-11-25 23:38:24.056571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:51.758 [2024-11-25 23:38:24.056584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:51.758 [2024-11-25 23:38:24.056593] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:51.758 [2024-11-25 23:38:24.056601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:51.758 [2024-11-25 23:38:24.056732] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 226.262 ms, result 0 00:36:53.674 00:36:53.674 00:36:53.674 23:38:25 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:36:53.674 [2024-11-25 23:38:25.706768] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 00:36:53.674 [2024-11-25 23:38:25.706939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87500 ] 00:36:53.674 [2024-11-25 23:38:25.875090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.674 [2024-11-25 23:38:25.993567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:53.936 [2024-11-25 23:38:26.281737] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:53.936 [2024-11-25 23:38:26.281822] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:54.199 [2024-11-25 23:38:26.443644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.200 [2024-11-25 23:38:26.443711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:36:54.200 [2024-11-25 23:38:26.443726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:54.200 [2024-11-25 23:38:26.443735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.200 [2024-11-25 23:38:26.443789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.200 [2024-11-25 23:38:26.443803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:54.200 [2024-11-25 23:38:26.443812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:36:54.200 [2024-11-25 23:38:26.443820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.200 [2024-11-25 23:38:26.443839] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:36:54.200 [2024-11-25 23:38:26.444573] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:36:54.200 [2024-11-25 23:38:26.444601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.200 [2024-11-25 23:38:26.444610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:54.200 [2024-11-25 23:38:26.444619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:36:54.200 [2024-11-25 23:38:26.444627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.200 [2024-11-25 23:38:26.445229] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:36:54.200 [2024-11-25 23:38:26.445297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.200 [2024-11-25 23:38:26.445314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 
00:36:54.200 [2024-11-25 23:38:26.445325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:36:54.200 [2024-11-25 23:38:26.445333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.200 [2024-11-25 23:38:26.445432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.200 [2024-11-25 23:38:26.445444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:36:54.200 [2024-11-25 23:38:26.445453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:36:54.200 [2024-11-25 23:38:26.445461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.200 [2024-11-25 23:38:26.445759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.200 [2024-11-25 23:38:26.445785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:54.200 [2024-11-25 23:38:26.445794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:36:54.200 [2024-11-25 23:38:26.445802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.200 [2024-11-25 23:38:26.445875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.200 [2024-11-25 23:38:26.445886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:54.200 [2024-11-25 23:38:26.445894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:36:54.200 [2024-11-25 23:38:26.445902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.200 [2024-11-25 23:38:26.445924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.200 [2024-11-25 23:38:26.445933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:36:54.200 [2024-11-25 23:38:26.445943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:36:54.200 [2024-11-25 23:38:26.445950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.200 [2024-11-25 23:38:26.445970] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:36:54.200 [2024-11-25 23:38:26.450388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.200 [2024-11-25 23:38:26.450427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:54.200 [2024-11-25 23:38:26.450438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.424 ms 00:36:54.200 [2024-11-25 23:38:26.450445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.200 [2024-11-25 23:38:26.450483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.200 [2024-11-25 23:38:26.450492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:36:54.200 [2024-11-25 23:38:26.450500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:36:54.200 [2024-11-25 23:38:26.450507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.200 [2024-11-25 23:38:26.450564] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:36:54.200 [2024-11-25 23:38:26.450588] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:36:54.200 [2024-11-25 23:38:26.450628] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:36:54.200 [2024-11-25 23:38:26.450644] 
upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:36:54.200 [2024-11-25 23:38:26.450748] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:36:54.200 [2024-11-25 23:38:26.450759] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:36:54.200 [2024-11-25 23:38:26.450769] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:36:54.200 [2024-11-25 23:38:26.450779] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:36:54.200 [2024-11-25 23:38:26.450788] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:36:54.200 [2024-11-25 23:38:26.450800] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:36:54.200 [2024-11-25 23:38:26.450808] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:36:54.200 [2024-11-25 23:38:26.450815] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:36:54.200 [2024-11-25 23:38:26.450822] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:36:54.200 [2024-11-25 23:38:26.450830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.200 [2024-11-25 23:38:26.450837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:36:54.200 [2024-11-25 23:38:26.450844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:36:54.200 [2024-11-25 23:38:26.450853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.200 [2024-11-25 23:38:26.450935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.200 [2024-11-25 23:38:26.450945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:36:54.200 [2024-11-25 23:38:26.450952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:36:54.200 [2024-11-25 23:38:26.450962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.200 [2024-11-25 23:38:26.451084] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:36:54.200 [2024-11-25 23:38:26.451096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:36:54.200 [2024-11-25 23:38:26.451105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:54.200 [2024-11-25 23:38:26.451113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:54.200 [2024-11-25 23:38:26.451122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:36:54.200 [2024-11-25 23:38:26.451129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:36:54.200 [2024-11-25 23:38:26.451136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:36:54.200 [2024-11-25 23:38:26.451143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:36:54.200 [2024-11-25 23:38:26.451151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:36:54.200 [2024-11-25 23:38:26.451158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:54.200 [2024-11-25 23:38:26.451166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:36:54.200 [2024-11-25 23:38:26.451174] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.62 MiB 00:36:54.200 [2024-11-25 23:38:26.451181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:54.200 [2024-11-25 23:38:26.451188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:36:54.200 [2024-11-25 23:38:26.451195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:36:54.200 [2024-11-25 23:38:26.451208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:54.200 [2024-11-25 23:38:26.451215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:36:54.200 [2024-11-25 23:38:26.451223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:36:54.200 [2024-11-25 23:38:26.451230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:54.200 [2024-11-25 23:38:26.451237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:36:54.200 [2024-11-25 23:38:26.451244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:36:54.200 [2024-11-25 23:38:26.451250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:54.200 [2024-11-25 23:38:26.451257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:36:54.200 [2024-11-25 23:38:26.451264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:36:54.200 [2024-11-25 23:38:26.451271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:54.200 [2024-11-25 23:38:26.451277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:36:54.200 [2024-11-25 23:38:26.451284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:36:54.200 [2024-11-25 23:38:26.451291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:54.200 [2024-11-25 23:38:26.451298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:36:54.200 [2024-11-25 23:38:26.451306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:36:54.200 [2024-11-25 23:38:26.451313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:54.200 [2024-11-25 23:38:26.451320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:36:54.200 [2024-11-25 23:38:26.451326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:36:54.200 [2024-11-25 23:38:26.451333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:54.200 [2024-11-25 23:38:26.451340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:36:54.200 [2024-11-25 23:38:26.451346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:36:54.200 [2024-11-25 23:38:26.451354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:54.200 [2024-11-25 23:38:26.451361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:36:54.200 [2024-11-25 23:38:26.451368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:36:54.200 [2024-11-25 23:38:26.451374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:54.201 [2024-11-25 23:38:26.451382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:36:54.201 [2024-11-25 23:38:26.451389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:36:54.201 [2024-11-25 23:38:26.451397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:54.201 [2024-11-25 23:38:26.451404] 
ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:36:54.201 [2024-11-25 23:38:26.451412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:36:54.201 [2024-11-25 23:38:26.451420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:54.201 [2024-11-25 23:38:26.451427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:54.201 [2024-11-25 23:38:26.451438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:36:54.201 [2024-11-25 23:38:26.451446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:36:54.201 [2024-11-25 23:38:26.451453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:36:54.201 [2024-11-25 23:38:26.451459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:36:54.201 [2024-11-25 23:38:26.451466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:36:54.201 [2024-11-25 23:38:26.451473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:36:54.201 [2024-11-25 23:38:26.451481] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:36:54.201 [2024-11-25 23:38:26.451491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:54.201 [2024-11-25 23:38:26.451500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:36:54.201 [2024-11-25 23:38:26.451508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:36:54.201 [2024-11-25 23:38:26.451515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:36:54.201 [2024-11-25 23:38:26.451522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:36:54.201 [2024-11-25 23:38:26.451530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:36:54.201 [2024-11-25 23:38:26.451537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:36:54.201 [2024-11-25 23:38:26.451544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:36:54.201 [2024-11-25 23:38:26.451551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:36:54.201 [2024-11-25 23:38:26.451558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:36:54.201 [2024-11-25 23:38:26.451566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:36:54.201 [2024-11-25 23:38:26.451573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:36:54.201 [2024-11-25 23:38:26.451580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:36:54.201 [2024-11-25 23:38:26.451588] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:36:54.201 [2024-11-25 23:38:26.451595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:36:54.201 [2024-11-25 23:38:26.451602] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:36:54.201 [2024-11-25 23:38:26.451611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:54.201 [2024-11-25 23:38:26.451620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:54.201 [2024-11-25 23:38:26.451627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:36:54.201 [2024-11-25 23:38:26.451635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:36:54.201 [2024-11-25 23:38:26.451646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:36:54.201 [2024-11-25 23:38:26.451654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.201 [2024-11-25 23:38:26.451662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:36:54.201 [2024-11-25 23:38:26.451670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:36:54.201 [2024-11-25 23:38:26.451678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.201 [2024-11-25 23:38:26.478954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.201 [2024-11-25 23:38:26.478997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:54.201 [2024-11-25 23:38:26.479009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.235 ms 00:36:54.201 [2024-11-25 23:38:26.479017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.201 [2024-11-25 23:38:26.479116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.201 [2024-11-25 23:38:26.479126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:36:54.201 [2024-11-25 23:38:26.479138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:36:54.201 [2024-11-25 23:38:26.479145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.201 [2024-11-25 23:38:26.525120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.201 [2024-11-25 23:38:26.525175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:54.201 [2024-11-25 23:38:26.525188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.920 ms 00:36:54.201 [2024-11-25 23:38:26.525197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.201 [2024-11-25 23:38:26.525247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.201 [2024-11-25 23:38:26.525257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:54.201 [2024-11-25 23:38:26.525266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:54.201 [2024-11-25 23:38:26.525274] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.201 [2024-11-25 23:38:26.525384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.201 [2024-11-25 23:38:26.525396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:54.201 [2024-11-25 23:38:26.525405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:36:54.201 [2024-11-25 23:38:26.525413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.201 [2024-11-25 23:38:26.525539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.201 [2024-11-25 23:38:26.525551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:54.201 [2024-11-25 23:38:26.525559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:36:54.201 [2024-11-25 23:38:26.525567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.201 [2024-11-25 23:38:26.541279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.201 [2024-11-25 23:38:26.541328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:54.201 [2024-11-25 23:38:26.541341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.694 ms 00:36:54.201 [2024-11-25 23:38:26.541350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.201 [2024-11-25 23:38:26.541504] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:36:54.201 [2024-11-25 23:38:26.541519] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:36:54.201 [2024-11-25 23:38:26.541530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.201 [2024-11-25 23:38:26.541542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:36:54.201 [2024-11-25 23:38:26.541552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:36:54.201 [2024-11-25 23:38:26.541561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.201 [2024-11-25 23:38:26.553857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.201 [2024-11-25 23:38:26.553901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:36:54.201 [2024-11-25 23:38:26.553911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.279 ms 00:36:54.201 [2024-11-25 23:38:26.553918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.201 [2024-11-25 23:38:26.554045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.201 [2024-11-25 23:38:26.554072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:36:54.201 [2024-11-25 23:38:26.554097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:36:54.201 [2024-11-25 23:38:26.554109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.201 [2024-11-25 23:38:26.554159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.201 [2024-11-25 23:38:26.554169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:36:54.201 [2024-11-25 23:38:26.554178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:36:54.201 [2024-11-25 23:38:26.554192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.201 
[2024-11-25 23:38:26.554765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.201 [2024-11-25 23:38:26.554788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:36:54.201 [2024-11-25 23:38:26.554797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:36:54.201 [2024-11-25 23:38:26.554805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.201 [2024-11-25 23:38:26.554826] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:36:54.201 [2024-11-25 23:38:26.554836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.201 [2024-11-25 23:38:26.554845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:36:54.201 [2024-11-25 23:38:26.554853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:36:54.201 [2024-11-25 23:38:26.554860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.463 [2024-11-25 23:38:26.567541] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:36:54.463 [2024-11-25 23:38:26.567700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.463 [2024-11-25 23:38:26.567711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:36:54.463 [2024-11-25 23:38:26.567721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.820 ms 00:36:54.463 [2024-11-25 23:38:26.567728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.463 [2024-11-25 23:38:26.570065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.463 [2024-11-25 23:38:26.570101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:36:54.463 [2024-11-25 23:38:26.570111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.314 ms 00:36:54.463 [2024-11-25 23:38:26.570118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.463 [2024-11-25 23:38:26.570192] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:36:54.463 [2024-11-25 23:38:26.570642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.463 [2024-11-25 23:38:26.570659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:36:54.463 [2024-11-25 23:38:26.570668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:36:54.463 [2024-11-25 23:38:26.570676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.463 [2024-11-25 23:38:26.570706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.463 [2024-11-25 23:38:26.570716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:36:54.464 [2024-11-25 23:38:26.570724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:54.464 [2024-11-25 23:38:26.570731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.464 [2024-11-25 23:38:26.570771] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:36:54.464 [2024-11-25 23:38:26.570782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.464 [2024-11-25 23:38:26.570790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:36:54.464 [2024-11-25 23:38:26.570798] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:36:54.464 [2024-11-25 23:38:26.570805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.464 [2024-11-25 23:38:26.597268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.464 [2024-11-25 23:38:26.597320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:36:54.464 [2024-11-25 23:38:26.597333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.445 ms 00:36:54.464 [2024-11-25 23:38:26.597342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.464 [2024-11-25 23:38:26.597427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:54.464 [2024-11-25 23:38:26.597437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:36:54.464 [2024-11-25 23:38:26.597447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:36:54.464 [2024-11-25 23:38:26.597455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:54.464 [2024-11-25 23:38:26.598652] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.543 ms, result 0 00:36:55.851  [2024-11-25T23:38:29.165Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-25T23:39:38.221Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-25 23:39:38.156307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.852 [2024-11-25 23:39:38.156384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:05.852 [2024-11-25 23:39:38.156401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:38:05.852 [2024-11-25 23:39:38.156410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.852 [2024-11-25 23:39:38.156435] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:05.852 [2024-11-25 23:39:38.159553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.852 [2024-11-25 23:39:38.159593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:05.852 [2024-11-25 23:39:38.159604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.100 ms 00:38:05.852 [2024-11-25 23:39:38.159613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.852 [2024-11-25 23:39:38.159857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.852 [2024-11-25 23:39:38.159868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:05.852 [2024-11-25 23:39:38.159877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:38:05.852 [2024-11-25 23:39:38.159885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.852 
[2024-11-25 23:39:38.159915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.852 [2024-11-25 23:39:38.159925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:38:05.852 [2024-11-25 23:39:38.159934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:05.852 [2024-11-25 23:39:38.159942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.853 [2024-11-25 23:39:38.160002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.853 [2024-11-25 23:39:38.160015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:38:05.853 [2024-11-25 23:39:38.160023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:38:05.853 [2024-11-25 23:39:38.160031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.853 [2024-11-25 23:39:38.160045] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:05.853 [2024-11-25 23:39:38.160800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:38:05.853 [2024-11-25 23:39:38.160830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.160992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
18: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161244] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161456] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:05.853 [2024-11-25 23:39:38.161589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 23:39:38.161596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 23:39:38.161604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 23:39:38.161612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 23:39:38.161619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 23:39:38.161627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 23:39:38.161635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 23:39:38.161642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 
23:39:38.161650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 23:39:38.161658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 23:39:38.161666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 23:39:38.161675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 23:39:38.161682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 23:39:38.161690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 23:39:38.161697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 23:39:38.161705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:05.854 [2024-11-25 23:39:38.161724] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:05.854 [2024-11-25 23:39:38.161733] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 85c08bd9-98e2-4ce8-9a59-5099f5a41b5c 00:38:05.854 [2024-11-25 23:39:38.161741] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:38:05.854 [2024-11-25 23:39:38.161749] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3104 00:38:05.854 [2024-11-25 23:39:38.161757] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3072 00:38:05.854 [2024-11-25 23:39:38.161766] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104 00:38:05.854 [2024-11-25 23:39:38.161776] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:05.854 [2024-11-25 23:39:38.161785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:05.854 [2024-11-25 23:39:38.161794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:05.854 [2024-11-25 23:39:38.161801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:05.854 [2024-11-25 23:39:38.161809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:05.854 [2024-11-25 23:39:38.161817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.854 [2024-11-25 23:39:38.161825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:05.854 [2024-11-25 23:39:38.161834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.773 ms 00:38:05.854 [2024-11-25 23:39:38.161841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.854 [2024-11-25 23:39:38.175523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.854 [2024-11-25 23:39:38.175564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:05.854 [2024-11-25 23:39:38.175583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.659 ms 00:38:05.854 [2024-11-25 23:39:38.175592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.854 [2024-11-25 23:39:38.175984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.854 [2024-11-25 23:39:38.176001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:05.854 [2024-11-25 
23:39:38.176011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:38:05.854 [2024-11-25 23:39:38.176019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.854 [2024-11-25 23:39:38.213269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.854 [2024-11-25 23:39:38.213309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:05.854 [2024-11-25 23:39:38.213321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.854 [2024-11-25 23:39:38.213330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.854 [2024-11-25 23:39:38.213405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.854 [2024-11-25 23:39:38.213417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:05.854 [2024-11-25 23:39:38.213427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.854 [2024-11-25 23:39:38.213436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.854 [2024-11-25 23:39:38.213500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.854 [2024-11-25 23:39:38.213516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:05.854 [2024-11-25 23:39:38.213525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.854 [2024-11-25 23:39:38.213535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.854 [2024-11-25 23:39:38.213553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.854 [2024-11-25 23:39:38.213563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:05.854 [2024-11-25 23:39:38.213572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.854 [2024-11-25 23:39:38.213581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:06.115 [2024-11-25 23:39:38.297705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:06.115 [2024-11-25 23:39:38.297759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:06.115 [2024-11-25 23:39:38.297772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:06.115 [2024-11-25 23:39:38.297781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:06.115 [2024-11-25 23:39:38.366681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:06.115 [2024-11-25 23:39:38.366737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:06.115 [2024-11-25 23:39:38.366749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:06.115 [2024-11-25 23:39:38.366759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:06.115 [2024-11-25 23:39:38.366844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:06.115 [2024-11-25 23:39:38.366854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:06.115 [2024-11-25 23:39:38.366869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:06.115 [2024-11-25 23:39:38.366878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:06.115 [2024-11-25 23:39:38.366917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:06.115 [2024-11-25 23:39:38.366926] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:06.115 [2024-11-25 23:39:38.366934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:06.115 [2024-11-25 23:39:38.366943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:06.115 [2024-11-25 23:39:38.367022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:06.115 [2024-11-25 23:39:38.367032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:06.115 [2024-11-25 23:39:38.367041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:06.115 [2024-11-25 23:39:38.367052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:06.115 [2024-11-25 23:39:38.367108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:06.115 [2024-11-25 23:39:38.367118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:06.115 [2024-11-25 23:39:38.367126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:06.115 [2024-11-25 23:39:38.367135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:06.115 [2024-11-25 23:39:38.367176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:06.115 [2024-11-25 23:39:38.367187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:06.115 [2024-11-25 23:39:38.367196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:06.115 [2024-11-25 23:39:38.367204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:06.115 [2024-11-25 23:39:38.367253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:06.115 [2024-11-25 23:39:38.367272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:06.115 [2024-11-25 23:39:38.367281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:06.115 [2024-11-25 23:39:38.367290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:06.115 [2024-11-25 23:39:38.367430] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 211.085 ms, result 0 00:38:07.058 00:38:07.058 00:38:07.058 23:39:39 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:38:09.607 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 84978 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84978 ']' 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84978 00:38:09.607 Process with pid 84978 is not found 00:38:09.607 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84978) - No such process 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- common/autotest_common.sh@981 
-- # echo 'Process with pid 84978 is not found' 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:38:09.607 Remove shared memory files 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_85c08bd9-98e2-4ce8-9a59-5099f5a41b5c_band_md /dev/hugepages/ftl_85c08bd9-98e2-4ce8-9a59-5099f5a41b5c_l2p_l1 /dev/hugepages/ftl_85c08bd9-98e2-4ce8-9a59-5099f5a41b5c_l2p_l2 /dev/hugepages/ftl_85c08bd9-98e2-4ce8-9a59-5099f5a41b5c_l2p_l2_ctx /dev/hugepages/ftl_85c08bd9-98e2-4ce8-9a59-5099f5a41b5c_nvc_md /dev/hugepages/ftl_85c08bd9-98e2-4ce8-9a59-5099f5a41b5c_p2l_pool /dev/hugepages/ftl_85c08bd9-98e2-4ce8-9a59-5099f5a41b5c_sb /dev/hugepages/ftl_85c08bd9-98e2-4ce8-9a59-5099f5a41b5c_sb_shm /dev/hugepages/ftl_85c08bd9-98e2-4ce8-9a59-5099f5a41b5c_trim_bitmap /dev/hugepages/ftl_85c08bd9-98e2-4ce8-9a59-5099f5a41b5c_trim_log /dev/hugepages/ftl_85c08bd9-98e2-4ce8-9a59-5099f5a41b5c_trim_md /dev/hugepages/ftl_85c08bd9-98e2-4ce8-9a59-5099f5a41b5c_vmap 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:38:09.607 00:38:09.607 real 5m28.512s 00:38:09.607 user 5m16.501s 00:38:09.607 sys 0m11.724s 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:09.607 ************************************ 00:38:09.607 END TEST ftl_restore_fast 00:38:09.607 ************************************ 00:38:09.607 23:39:41 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:38:09.607 23:39:41 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:38:09.607 23:39:41 ftl -- ftl/ftl.sh@14 -- # killprocess 74996 00:38:09.607 23:39:41 ftl -- common/autotest_common.sh@954 -- # '[' -z 74996 ']' 00:38:09.607 23:39:41 ftl -- common/autotest_common.sh@958 -- # kill -0 74996 00:38:09.607 Process with pid 74996 is not found 00:38:09.607 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74996) - No such process 00:38:09.607 23:39:41 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74996 is not found' 00:38:09.607 23:39:41 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:38:09.607 23:39:41 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=88264 00:38:09.607 23:39:41 ftl -- ftl/ftl.sh@20 -- # waitforlisten 88264 00:38:09.607 23:39:41 ftl -- common/autotest_common.sh@835 -- # '[' -z 88264 ']' 00:38:09.607 23:39:41 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:09.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:09.607 23:39:41 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:09.607 23:39:41 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:09.607 23:39:41 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:09.607 23:39:41 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:09.607 23:39:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:09.607 [2024-11-25 23:39:41.665022] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.03.0 initialization... 
00:38:09.607 [2024-11-25 23:39:41.665188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88264 ] 00:38:09.607 [2024-11-25 23:39:41.828820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.607 [2024-11-25 23:39:41.955690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.551 23:39:42 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:10.551 23:39:42 ftl -- common/autotest_common.sh@868 -- # return 0 00:38:10.551 23:39:42 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:38:10.551 nvme0n1 00:38:10.813 23:39:42 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:38:10.813 23:39:42 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:10.813 23:39:42 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:38:10.813 23:39:43 ftl -- ftl/common.sh@28 -- # stores=550552f7-c00f-45e1-9214-99def0d1bd3e 00:38:10.813 23:39:43 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:38:10.813 23:39:43 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 550552f7-c00f-45e1-9214-99def0d1bd3e 00:38:11.074 23:39:43 ftl -- ftl/ftl.sh@23 -- # killprocess 88264 00:38:11.074 23:39:43 ftl -- common/autotest_common.sh@954 -- # '[' -z 88264 ']' 00:38:11.074 23:39:43 ftl -- common/autotest_common.sh@958 -- # kill -0 88264 00:38:11.074 23:39:43 ftl -- common/autotest_common.sh@959 -- # uname 00:38:11.074 23:39:43 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:11.074 23:39:43 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88264 00:38:11.074 23:39:43 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:11.074 killing process with pid 88264 00:38:11.074 23:39:43 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:11.074 23:39:43 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88264' 00:38:11.074 23:39:43 ftl -- common/autotest_common.sh@973 -- # kill 88264 00:38:11.074 23:39:43 ftl -- common/autotest_common.sh@978 -- # wait 88264 00:38:12.989 23:39:44 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:12.989 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:12.989 Waiting for block devices as requested 00:38:12.989 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:12.989 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:12.989 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:38:13.250 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:38:18.538 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:38:18.538 23:39:50 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:38:18.538 Remove shared memory files 00:38:18.538 23:39:50 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:38:18.538 23:39:50 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:38:18.538 23:39:50 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:38:18.538 23:39:50 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:38:18.538 23:39:50 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:38:18.538 23:39:50 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:38:18.538 00:38:18.538 real 
20m21.365s 00:38:18.538 user 22m9.679s 00:38:18.538 sys 1m33.627s 00:38:18.538 ************************************ 00:38:18.538 END TEST ftl 00:38:18.538 ************************************ 00:38:18.538 23:39:50 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:18.538 23:39:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:18.538 23:39:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:18.538 23:39:50 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:18.538 23:39:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:18.538 23:39:50 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:18.538 23:39:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:18.538 23:39:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:18.538 23:39:50 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:18.538 23:39:50 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:18.538 23:39:50 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:18.538 23:39:50 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:18.538 23:39:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:18.538 23:39:50 -- common/autotest_common.sh@10 -- # set +x 00:38:18.538 23:39:50 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:18.538 23:39:50 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:18.538 23:39:50 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:18.538 23:39:50 -- common/autotest_common.sh@10 -- # set +x 00:38:19.483 INFO: APP EXITING 00:38:19.483 INFO: killing all VMs 00:38:19.483 INFO: killing vhost app 00:38:19.483 INFO: EXIT DONE 00:38:20.057 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:20.320 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:38:20.320 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:38:20.320 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:38:20.320 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:38:20.893 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:21.155 Cleaning 00:38:21.156 Removing: /var/run/dpdk/spdk0/config 00:38:21.156 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:21.156 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:21.156 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:21.156 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:21.156 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:21.156 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:21.156 Removing: /var/run/dpdk/spdk0 00:38:21.156 Removing: /var/run/dpdk/spdk_pid56907 00:38:21.156 Removing: /var/run/dpdk/spdk_pid57109 00:38:21.156 Removing: /var/run/dpdk/spdk_pid57322 00:38:21.156 Removing: /var/run/dpdk/spdk_pid57415 00:38:21.156 Removing: /var/run/dpdk/spdk_pid57454 00:38:21.156 Removing: /var/run/dpdk/spdk_pid57577 00:38:21.156 Removing: /var/run/dpdk/spdk_pid57595 00:38:21.156 Removing: /var/run/dpdk/spdk_pid57787 00:38:21.156 Removing: /var/run/dpdk/spdk_pid57883 00:38:21.156 Removing: /var/run/dpdk/spdk_pid57974 00:38:21.156 Removing: /var/run/dpdk/spdk_pid58079 00:38:21.156 Removing: /var/run/dpdk/spdk_pid58171 00:38:21.156 Removing: /var/run/dpdk/spdk_pid58210 00:38:21.156 Removing: /var/run/dpdk/spdk_pid58247 00:38:21.156 Removing: /var/run/dpdk/spdk_pid58317 00:38:21.156 Removing: /var/run/dpdk/spdk_pid58423 00:38:21.156 Removing: /var/run/dpdk/spdk_pid58854 00:38:21.156 Removing: /var/run/dpdk/spdk_pid58918 00:38:21.156 
Removing: /var/run/dpdk/spdk_pid58970 00:38:21.156 Removing: /var/run/dpdk/spdk_pid58986 00:38:21.156 Removing: /var/run/dpdk/spdk_pid59077 00:38:21.156 Removing: /var/run/dpdk/spdk_pid59093 00:38:21.156 Removing: /var/run/dpdk/spdk_pid59195 00:38:21.156 Removing: /var/run/dpdk/spdk_pid59210 00:38:21.156 Removing: /var/run/dpdk/spdk_pid59264 00:38:21.156 Removing: /var/run/dpdk/spdk_pid59282 00:38:21.156 Removing: /var/run/dpdk/spdk_pid59335 00:38:21.156 Removing: /var/run/dpdk/spdk_pid59353 00:38:21.156 Removing: /var/run/dpdk/spdk_pid59508 00:38:21.156 Removing: /var/run/dpdk/spdk_pid59544 00:38:21.156 Removing: /var/run/dpdk/spdk_pid59628 00:38:21.156 Removing: /var/run/dpdk/spdk_pid59800 00:38:21.156 Removing: /var/run/dpdk/spdk_pid59884 00:38:21.156 Removing: /var/run/dpdk/spdk_pid59920 00:38:21.156 Removing: /var/run/dpdk/spdk_pid60359 00:38:21.156 Removing: /var/run/dpdk/spdk_pid60457 00:38:21.156 Removing: /var/run/dpdk/spdk_pid60566 00:38:21.156 Removing: /var/run/dpdk/spdk_pid60621 00:38:21.156 Removing: /var/run/dpdk/spdk_pid60641 00:38:21.156 Removing: /var/run/dpdk/spdk_pid60725 00:38:21.156 Removing: /var/run/dpdk/spdk_pid61350 00:38:21.156 Removing: /var/run/dpdk/spdk_pid61386 00:38:21.156 Removing: /var/run/dpdk/spdk_pid61872 00:38:21.156 Removing: /var/run/dpdk/spdk_pid61971 00:38:21.156 Removing: /var/run/dpdk/spdk_pid62091 00:38:21.156 Removing: /var/run/dpdk/spdk_pid62139 00:38:21.156 Removing: /var/run/dpdk/spdk_pid62164 00:38:21.156 Removing: /var/run/dpdk/spdk_pid62195 00:38:21.156 Removing: /var/run/dpdk/spdk_pid64054 00:38:21.418 Removing: /var/run/dpdk/spdk_pid64186 00:38:21.418 Removing: /var/run/dpdk/spdk_pid64190 00:38:21.418 Removing: /var/run/dpdk/spdk_pid64207 00:38:21.418 Removing: /var/run/dpdk/spdk_pid64247 00:38:21.418 Removing: /var/run/dpdk/spdk_pid64251 00:38:21.418 Removing: /var/run/dpdk/spdk_pid64263 00:38:21.418 Removing: /var/run/dpdk/spdk_pid64308 00:38:21.418 Removing: /var/run/dpdk/spdk_pid64312 00:38:21.418 Removing: /var/run/dpdk/spdk_pid64324 00:38:21.418 Removing: /var/run/dpdk/spdk_pid64363 00:38:21.418 Removing: /var/run/dpdk/spdk_pid64373 00:38:21.418 Removing: /var/run/dpdk/spdk_pid64385 00:38:21.418 Removing: /var/run/dpdk/spdk_pid65772 00:38:21.418 Removing: /var/run/dpdk/spdk_pid65865 00:38:21.418 Removing: /var/run/dpdk/spdk_pid67267 00:38:21.418 Removing: /var/run/dpdk/spdk_pid69026 00:38:21.418 Removing: /var/run/dpdk/spdk_pid69093 00:38:21.418 Removing: /var/run/dpdk/spdk_pid69165 00:38:21.418 Removing: /var/run/dpdk/spdk_pid69276 00:38:21.418 Removing: /var/run/dpdk/spdk_pid69367 00:38:21.418 Removing: /var/run/dpdk/spdk_pid69466 00:38:21.418 Removing: /var/run/dpdk/spdk_pid69540 00:38:21.418 Removing: /var/run/dpdk/spdk_pid69615 00:38:21.418 Removing: /var/run/dpdk/spdk_pid69726 00:38:21.418 Removing: /var/run/dpdk/spdk_pid69812 00:38:21.418 Removing: /var/run/dpdk/spdk_pid69908 00:38:21.418 Removing: /var/run/dpdk/spdk_pid69982 00:38:21.418 Removing: /var/run/dpdk/spdk_pid70052 00:38:21.418 Removing: /var/run/dpdk/spdk_pid70156 00:38:21.418 Removing: /var/run/dpdk/spdk_pid70252 00:38:21.418 Removing: /var/run/dpdk/spdk_pid70349 00:38:21.418 Removing: /var/run/dpdk/spdk_pid70412 00:38:21.418 Removing: /var/run/dpdk/spdk_pid70492 00:38:21.418 Removing: /var/run/dpdk/spdk_pid70597 00:38:21.418 Removing: /var/run/dpdk/spdk_pid70689 00:38:21.418 Removing: /var/run/dpdk/spdk_pid70784 00:38:21.418 Removing: /var/run/dpdk/spdk_pid70853 00:38:21.418 Removing: /var/run/dpdk/spdk_pid70932 00:38:21.418 Removing: 
00:38:21.418 Removing: /var/run/dpdk/spdk_pid71007
00:38:21.418 Removing: /var/run/dpdk/spdk_pid71081
00:38:21.418 Removing: /var/run/dpdk/spdk_pid71179
00:38:21.418 Removing: /var/run/dpdk/spdk_pid71274
00:38:21.418 Removing: /var/run/dpdk/spdk_pid71364
00:38:21.418 Removing: /var/run/dpdk/spdk_pid71433
00:38:21.418 Removing: /var/run/dpdk/spdk_pid71509
00:38:21.418 Removing: /var/run/dpdk/spdk_pid71589
00:38:21.418 Removing: /var/run/dpdk/spdk_pid71659
00:38:21.418 Removing: /var/run/dpdk/spdk_pid71762
00:38:21.418 Removing: /var/run/dpdk/spdk_pid71847
00:38:21.418 Removing: /var/run/dpdk/spdk_pid71995
00:38:21.418 Removing: /var/run/dpdk/spdk_pid72275
00:38:21.418 Removing: /var/run/dpdk/spdk_pid72317
00:38:21.418 Removing: /var/run/dpdk/spdk_pid72763
00:38:21.418 Removing: /var/run/dpdk/spdk_pid72949
00:38:21.418 Removing: /var/run/dpdk/spdk_pid73042
00:38:21.418 Removing: /var/run/dpdk/spdk_pid73152
00:38:21.418 Removing: /var/run/dpdk/spdk_pid73200
00:38:21.418 Removing: /var/run/dpdk/spdk_pid73225
00:38:21.418 Removing: /var/run/dpdk/spdk_pid73512
00:38:21.418 Removing: /var/run/dpdk/spdk_pid73567
00:38:21.418 Removing: /var/run/dpdk/spdk_pid73637
00:38:21.418 Removing: /var/run/dpdk/spdk_pid74040
00:38:21.418 Removing: /var/run/dpdk/spdk_pid74190
00:38:21.418 Removing: /var/run/dpdk/spdk_pid74996
00:38:21.418 Removing: /var/run/dpdk/spdk_pid75124
00:38:21.418 Removing: /var/run/dpdk/spdk_pid75293
00:38:21.418 Removing: /var/run/dpdk/spdk_pid75385
00:38:21.419 Removing: /var/run/dpdk/spdk_pid75703
00:38:21.419 Removing: /var/run/dpdk/spdk_pid76003
00:38:21.419 Removing: /var/run/dpdk/spdk_pid76347
00:38:21.419 Removing: /var/run/dpdk/spdk_pid76529
00:38:21.419 Removing: /var/run/dpdk/spdk_pid76759
00:38:21.419 Removing: /var/run/dpdk/spdk_pid76806
00:38:21.419 Removing: /var/run/dpdk/spdk_pid77039
00:38:21.419 Removing: /var/run/dpdk/spdk_pid77068
00:38:21.419 Removing: /var/run/dpdk/spdk_pid77116
00:38:21.419 Removing: /var/run/dpdk/spdk_pid77436
00:38:21.419 Removing: /var/run/dpdk/spdk_pid77668
00:38:21.419 Removing: /var/run/dpdk/spdk_pid78410
00:38:21.419 Removing: /var/run/dpdk/spdk_pid79293
00:38:21.419 Removing: /var/run/dpdk/spdk_pid80246
00:38:21.419 Removing: /var/run/dpdk/spdk_pid81271
00:38:21.419 Removing: /var/run/dpdk/spdk_pid81413
00:38:21.419 Removing: /var/run/dpdk/spdk_pid81497
00:38:21.419 Removing: /var/run/dpdk/spdk_pid81865
00:38:21.419 Removing: /var/run/dpdk/spdk_pid81922
00:38:21.419 Removing: /var/run/dpdk/spdk_pid82719
00:38:21.419 Removing: /var/run/dpdk/spdk_pid83150
00:38:21.419 Removing: /var/run/dpdk/spdk_pid83926
00:38:21.419 Removing: /var/run/dpdk/spdk_pid84048
00:38:21.419 Removing: /var/run/dpdk/spdk_pid84094
00:38:21.419 Removing: /var/run/dpdk/spdk_pid84148
00:38:21.419 Removing: /var/run/dpdk/spdk_pid84205
00:38:21.419 Removing: /var/run/dpdk/spdk_pid84258
00:38:21.419 Removing: /var/run/dpdk/spdk_pid84473
00:38:21.419 Removing: /var/run/dpdk/spdk_pid84554
00:38:21.419 Removing: /var/run/dpdk/spdk_pid84621
00:38:21.419 Removing: /var/run/dpdk/spdk_pid84688
00:38:21.419 Removing: /var/run/dpdk/spdk_pid84727
00:38:21.419 Removing: /var/run/dpdk/spdk_pid84784
00:38:21.680 Removing: /var/run/dpdk/spdk_pid84978
00:38:21.680 Removing: /var/run/dpdk/spdk_pid85199
00:38:21.680 Removing: /var/run/dpdk/spdk_pid86069
00:38:21.680 Removing: /var/run/dpdk/spdk_pid86933
00:38:21.680 Removing: /var/run/dpdk/spdk_pid87500
00:38:21.680 Removing: /var/run/dpdk/spdk_pid88264
00:38:21.680 Clean
00:38:21.680 23:39:53 -- common/autotest_common.sh@1453 -- # return 0
00:38:21.680 23:39:53 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:38:21.680 23:39:53 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:21.680 23:39:53 -- common/autotest_common.sh@10 -- # set +x
00:38:21.680 23:39:53 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:38:21.680 23:39:53 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:21.680 23:39:53 -- common/autotest_common.sh@10 -- # set +x
00:38:21.680 23:39:53 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:38:21.680 23:39:53 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:38:21.680 23:39:53 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:38:21.680 23:39:53 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:38:21.680 23:39:53 -- spdk/autotest.sh@398 -- # hostname
00:38:21.680 23:39:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:38:21.942 geninfo: WARNING: invalid characters removed from testname!
00:38:48.606 23:40:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:49.543 23:40:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:52.093 23:40:24 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:55.402 23:40:27 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:57.956 23:40:30 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:59.867 23:40:32 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:39:03.170 23:40:34 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:03.170 23:40:34 -- spdk/autorun.sh@1 -- $ timing_finish
00:39:03.170 23:40:34 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:39:03.170 23:40:34 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:03.170 23:40:34 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:03.171 23:40:34 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
+ [[ -n 5029 ]]
00:39:03.182 + sudo kill 5029
00:39:03.182 [Pipeline] }
00:39:03.198 [Pipeline] // timeout
00:39:03.204 [Pipeline] }
00:39:03.219 [Pipeline] // stage
00:39:03.224 [Pipeline] }
00:39:03.239 [Pipeline] // catchError
00:39:03.249 [Pipeline] stage
00:39:03.252 [Pipeline] { (Stop VM)
00:39:03.266 [Pipeline] sh
00:39:03.551 + vagrant halt
00:39:06.100 ==> default: Halting domain...
00:39:11.403 [Pipeline] sh
00:39:11.688 + vagrant destroy -f
00:39:14.238 ==> default: Removing domain...
00:39:15.195 [Pipeline] sh
00:39:15.477 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:39:15.488 [Pipeline] }
00:39:15.503 [Pipeline] // stage
00:39:15.508 [Pipeline] }
00:39:15.522 [Pipeline] // dir
00:39:15.527 [Pipeline] }
00:39:15.541 [Pipeline] // wrap
00:39:15.550 [Pipeline] }
00:39:15.563 [Pipeline] // catchError
00:39:15.573 [Pipeline] stage
00:39:15.575 [Pipeline] { (Epilogue)
00:39:15.588 [Pipeline] sh
00:39:15.873 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:21.258 [Pipeline] catchError
00:39:21.260 [Pipeline] {
00:39:21.274 [Pipeline] sh
00:39:21.560 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:21.560 Artifacts sizes are good
00:39:21.569 [Pipeline] }
00:39:21.584 [Pipeline] // catchError
00:39:21.595 [Pipeline] archiveArtifacts
00:39:21.602 Archiving artifacts
00:39:21.735 [Pipeline] cleanWs
00:39:21.746 [WS-CLEANUP] Deleting project workspace...
00:39:21.747 [WS-CLEANUP] Deferred wipeout is used...
00:39:21.755 [WS-CLEANUP] done
00:39:21.756 [Pipeline] }
00:39:21.772 [Pipeline] // stage
00:39:21.777 [Pipeline] }
00:39:21.790 [Pipeline] // node
00:39:21.795 [Pipeline] End of Pipeline
00:39:21.827 Finished: SUCCESS